Abstract: Disclosed is a system (102) to determine the attentiveness of a user. A receiving module (214) for receiving a video from a camera, a region of restriction, a region of interest within the region of restriction and a location of the camera with respect to a display screen. An identifying module (216) for identifying an eye gaze of a user based on the video and a primary image processing methodology. A computing module (218) for computing a location of the eye gaze in the region of restriction using a secondary image processing methodology. A generating module (220) for generating an attentiveness of the user associated with the region of interest based on tracking of the eye gaze and a corresponding location of the eye gaze in the region of restriction.
PRIORITY INFORMATION
[001] This patent application does not claim priority from any application.
TECHNICAL FIELD
[002] The present subject matter described herein, in general, relates to determining an attentiveness of a user and more particularly to determining attentiveness of a user on a display screen.
BACKGROUND
[003] In today's era, people frequently access or edit online content. The online content includes, but is not limited to, blogs, long-form content, case studies, white papers, eBooks, infographics, templates, checklists and videos. The online content may be accessed through a user device having a display screen. The online content may be presented in a layout on the display screen. A person, while accessing the user device, may gaze at the online content on the display screen.
[004] The person may gaze at the layout of the display screen in a particular manner. The person for example, in one instance, may read certain paragraphs of the blog and ignore the remaining paragraphs of the blog. The person may, in another instance, see certain advertisements in the layout of the online content.
SUMMARY
[005] Before the present systems and methods to determine attentiveness of the user are described, it is to be understood that this application is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments to determine attentiveness of the user which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments to determine the attentiveness of the user only and is not intended to limit the scope of the present application. This summary is provided to introduce concepts related to systems and methods for determining attentiveness of a user, and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[006] In one implementation, a method for determining the attentiveness of a user is disclosed. The steps may include receiving a video from a camera, a region of restriction and a region of interest within the region of restriction. The region of restriction may indicate an area less than or equal to a dimension of the display screen. After receiving, a primary image processing methodology may be used to identify an eye gaze of a user based on the video. After identifying the eye gaze, a location of the eye gaze may be computed in the region of restriction. The location of the eye gaze may be computed employing a secondary image processing methodology. After computing the location of the eye gaze, an attentiveness of the user may be generated. The attentiveness of the user may be based on tracking of the eye gaze and a corresponding location of the eye gaze in the region of restriction. The attentiveness of the user may be associated with the region of interest. The attentiveness of the user may be indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent by the eye gaze in the region of interest.
[007] In another implementation, a system for determining attentiveness of the user is disclosed. The system comprises a receiving module, an identifying module, a computing module and a generating module. The receiving module may receive a video from a camera, a region of restriction and a region of interest within the region of restriction. The region of restriction may be less than or equal to a dimension of the display screen. Further, the identifying module may identify an eye gaze of a user based on the video and a primary image processing methodology. After identifying the eye gaze, the computing module may compute a location of the eye gaze in the region of restriction using a secondary image processing methodology. After computing the location of the eye gaze in the region of restriction, the generating module may generate an attentiveness of the user based on tracking of the eye gaze and the location of the eye gaze in the region of restriction. The attentiveness of the user may be associated with the region of interest. The attentiveness may be indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent by the eye gaze in the region of interest.
[008] In yet another implementation, a non-transitory computer readable medium embodying a program executable in a computing device for determining attentiveness of the user is disclosed. The program code may comprise receiving a video from a camera, a region of restriction and a region of interest within the region of restriction. The region of restriction may indicate an area less than or equal to a dimension of the display screen. After receiving, the program code may use a primary image processing methodology to identify an eye gaze of a user based on the video. After identifying the eye gaze, the program code may further compute a location of the eye gaze in the region of restriction using a secondary image processing methodology. After computing the location of the eye gaze in the region of restriction, the program code may further generate the attentiveness of the user. The attentiveness of the user may be based on tracking of the eye gaze and the location of the eye gaze in the region of restriction. The attentiveness of the user may be associated with the region of interest. The attentiveness of the user may further be indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent by the eye gaze in the region of interest.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] The foregoing detailed description of embodiments is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the disclosure, example constructions of the disclosure are shown in the present document; however, the disclosure is not limited to the specific methods and apparatus to determine attentiveness of a user disclosed in the document and the drawings.
[0010] The detailed description is given with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0011] Figure 1 illustrates a network implementation of a system for determining an attentiveness of a user, in accordance with an embodiment of the present subject matter.
[0012] Figure 2 illustrates a hardware implementation of a system for determining an attentiveness of a user, in accordance with an embodiment of the present subject matter.
[0013] Figure 3 illustrates a method for determining an attentiveness of a user using a system, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0014] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "receiving", "identifying", "computing", "generating" and other forms thereof are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be
noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those for determining attentiveness of a user described herein can be used in the practice or testing of embodiments of the present disclosure, the exemplary systems and methods are now described. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.
[0015] Various modifications to the embodiment of determining attentiveness of the user will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0016] While aspects of described system and method for determining attentiveness of a user may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system for determining attentiveness of a user.
[0017] The present disclosure describes a system and method for determining the attentiveness of a user. To determine attentiveness, initially, a video from a camera, a region of restriction and a region of interest within the region of restriction may be received. The region of restriction may indicate an area less than or equal to a dimension of the display screen. After receiving, a primary image processing methodology may be used to identify an eye gaze of a user based on the video.
[0018] After identifying the eye gaze, a location of the eye gaze in the region of restriction may be computed. The computing of location of eye gaze may be based on a secondary image processing methodology. After computing the location of the eye gaze, an attentiveness of the user may be generated. The attentiveness of the user may be associated with the region of interest. The attentiveness of the user may be based on tracking of the eye gaze and the location of the eye gaze in the region of restriction. The attentiveness of the user may be indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent by the eye gaze in the region of interest.
Figure 1 description:
[0019] Referring now to Figure 1, a network implementation 100 of a system 102 for determining attentiveness of a user is disclosed. Although determining attentiveness of a user
is explained considering that the system 102 to determine attentiveness of a user is implemented on a server, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, an embedded hardware platform board, a reprogrammable device platform and the like. In one implementation, the system 102 may be implemented over a cloud network. Further, it will be understood that the system 102 may access one or more environments 104.1, 104.2…104.N, collectively referred to as environment 104. Each environment 104 from the multiple environments 104.1 to 104.N comprises a user device 108.1 to 108.N (hereinafter collectively referred to as 108), a camera 105.1 to 105.N (hereinafter collectively referred to as camera 105) and a user 107.1 to 107.N (hereinafter collectively referred to as user 107). In one example, the user devices 108 may be one of a computer, a laptop and the like. The user device 108 and the camera 105 may be communicatively coupled to the system 102 through the network 106. The user 107 accesses the user device 108. Further, the camera 105 records a video of the user 107. Examples of the user device 108 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation.
[0020] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 may be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0021] In one embodiment, the system 102 may receive a video from a camera 105, a region of restriction and a region of interest within the region of restriction from a user device 108 via a network 106. The display screen may indicate a display of the user device 108. The region of restriction may indicate the dimensions of the display screen.
[0022] The system 102 after receiving the video from the camera 105, may identify an eye gaze of the user by applying a primary image processing methodology on the video. The system may apply the primary image processing methodology on the video to identify a
pupil from a set of pupils of the user 107 in the video. After identifying the pupil, the system 102 may further mask the identified pupil. After masking the pupil, the system 102 may compute a centroid point on the masked pupil, wherein the centroid point indicates the eye gaze of the user.
[0023] The system 102 after identifying the eye gaze may compute a location of the eye gaze in the region of restriction using a secondary image processing methodology. The system 102 further generates an attentiveness of the user associated with the region of interest based on tracking the eye gaze and the location of the eye gaze in the region of restriction.
[0024] To detect the location of the eye gaze in the region of restriction, the secondary image processing methodology may receive a location of the camera 105 with respect to the display screen of the user device 108. The secondary image processing methodology may detect a distance between the eye gaze and the camera 105 based on the video. The distance may be computed using various facial depth estimation techniques employed by the secondary image processing methodology. The secondary image processing methodology may further identify the direction of the eye gaze with respect to the camera 105. The direction of the eye gaze may indicate an angular displacement of the eye gaze with respect to the camera 105. The identifying of the location of the eye gaze in the region of restriction may be based on the distance between the eye gaze and the camera, the direction of the eye gaze and the location of the camera 105 with respect to the display screen of the user device 108.
[0025] Further, the system 102 may track the eye gaze movement in the region of restriction by computing a ratio. The ratio may indicate the movement of the eye gaze in the video with respect to a distance traversed by the eye gaze in the region of restriction. The computing of the ratio may be based on the location of the camera 105 with respect to the display screen of the user device 108 and the location of the eye gaze on the display screen. The tracking of the eye gaze in the region of restriction may be based on the ratio and the location of the eye gaze in the region of restriction.
[0026] The attentiveness may be indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent by the eye gaze in the region of interest. The system 102 may further store the attentiveness of the user with respect to any of the user device 108.
Figure 2 description:
[0027] Referring now to Figure 2, a hardware implementation of the system 102 for determining an attentiveness of a user is disclosed. The system 102
is illustrated in accordance with an embodiment of the present subject matter. In one embodiment, the system 102 to determine attentiveness of a user may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0028] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with the user directly or through the user devices 108. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0029] The memory 206 may include any computer-readable medium or computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210.
[0030] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. In one implementation, the modules 208 may include a receiving module 214, an identifying module 216, a computing module 218, a generating module 220 and other modules 222. The other modules 222 may include programs or coded instructions that supplement applications and functions of the system 102. The modules 208 described herein may be implemented as software modules that may be executed in the cloud-based computing environment of the system 102.
[0031] The data 210, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 208. The data 210 may
also include a system database 224 and other data 226. The other data 226 may include data generated as a result of the execution of one or more modules in the other modules 222.
[0032] The lack of such technology necessitates the system 102 to determine the attentiveness of the user, more particularly the attentiveness of the user on a display screen of a device that the user accesses. The system 102 may register a user through the I/O interface 204 to use the system 102. The system may further receive inputs from a camera and a display screen of a user device through the I/O interface 204. In one aspect, the user may access the I/O interface 204 of the system 102. The system 102 may employ the receiving module 214, the identifying module 216, the computing module 218 and the generating module 220 to determine the attentiveness of the user.
Receiving module 214:
[0033] The receiving module 214 may receive a video from a camera, a region of restriction and a region of interest within the region of restriction from the I/O interface 204. The region of restriction may be less than or equal to a dimension of the display screen. In one aspect, the camera may be operatively connected to the display screen. For example, if the user accesses his laptop, the display screen indicates the display screen of the laptop. The camera here may be an inbuilt camera of the laptop that is operatively connected to the display screen. The region of restriction may indicate the dimension of the display screen of the laptop. Further, the region of interest may indicate the layout of the online content on the screen of the laptop.
Identifying Module 216:
[0034] The identifying module 216 may identify an eye gaze of the user based on the video received by the receiving module 214 and a primary image processing methodology. The primary image processing methodology may identify a pupil from the set of pupils in the video. The identified pupil may be masked to generate a masked pupil. The masking of the pupil may be based on an image threshold filter. The image threshold filter may mark the points surrounding the pupil to further mask the pupil. After masking the pupil, a centroid point for the pupil may be computed. The centroid point may indicate the eye gaze of the user.
[0035] In one embodiment, the primary image processing methodology may detect the face of the user using deep learning neural networks applying feature-based, appearance-based, knowledge-based and template-matching techniques. The deep learning neural networks may locate prominent feature points on the face. The primary image processing methodology may locate points around the set of eyes. After the points around the set of eyes are detected, the eyes may be masked. After masking the eyes, an image threshold filter that detects the pupils assists the primary image processing methodology in masking the set of pupils. After the set of pupils are masked, a set of masked pupils may be generated. The primary image processing methodology may use the masked pupils to compute a set of centroid points for the set of pupils. Further, the set of centroid points may be used to compute a common centroid for the set of centroid points. The common centroid lies at the midpoint of the segment joining the set of centroid points. The common centroid indicates the eye gaze of the user in the video.
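The thresholding and centroid steps above can be sketched in a few lines. The listing below is a minimal illustration, not the disclosed implementation: `pupil_centroid` and `common_centroid` are hypothetical helper names, the grayscale eye patch is assumed to be a list of pixel rows, and the fixed intensity threshold stands in for whatever image threshold filter an embodiment would use.

```python
def pupil_centroid(eye_region, threshold=50):
    """Apply an intensity-threshold filter to mask the dark pupil in a
    grayscale eye patch and return the centroid (row, col) of the
    masked pixels."""
    pts = [(r, c)
           for r, row in enumerate(eye_region)
           for c, px in enumerate(row)
           if px < threshold]          # pupil pixels are the darkest
    if not pts:
        return None                    # no pupil visible in this patch
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def common_centroid(c_left, c_right):
    """Midpoint of the segment joining the two pupil centroids; this
    common centroid stands in for the eye gaze of the user."""
    return ((c_left[0] + c_right[0]) / 2.0,
            (c_left[1] + c_right[1]) / 2.0)
```

For example, pupil centroids at (3.5, 3.5) and (3.5, 9.5) yield a common centroid at (3.5, 6.5), the point taken to indicate the eye gaze.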
Computing module 218:
[0036] The computing module 218 may use the eye gaze identified by the identifying module 216 to compute the location of the eye gaze in the region of restriction. The computing module 218 may use the secondary image processing methodology to locate the eye gaze. To identify the location, the secondary image processing methodology may receive the location of the camera with respect to the display screen. Further, the secondary image processing methodology may compute a distance between the camera and the eye gaze based on the video. Furthermore, the secondary image processing methodology may identify a direction of the eye gaze with respect to the camera. The computing module 218 may, based on the distance between the camera and the eye gaze, the direction of the eye gaze and the location of the camera with respect to the display screen, compute the location of the eye gaze.
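One way such a computation could be realized is a simple pinhole-style projection: the gaze point is offset from the camera's position on the screen plane by the eye-to-camera distance times the tangent of the angular displacement along each axis. The sketch below is an illustrative assumption, not the patented secondary image processing methodology; the function name and the coordinate convention (distances measured from the top-left corner of the display screen) are hypothetical.

```python
import math

def gaze_location_on_screen(cam_pos, distance, theta, phi):
    """Estimate the eye-gaze location in the region of restriction.

    cam_pos  -- (x, y) location of the camera relative to the top-left
                corner of the display screen (assumed convention).
    distance -- eye-to-camera distance, e.g. from a facial depth
                estimation technique, in the same units as cam_pos.
    theta    -- horizontal angular displacement of the eye gaze with
                respect to the camera axis, in radians.
    phi      -- vertical angular displacement, in radians.
    """
    # The gaze ray meets the screen plane at an offset of
    # distance * tan(angle) from the camera position on each axis.
    x = cam_pos[0] + distance * math.tan(theta)
    y = cam_pos[1] + distance * math.tan(phi)
    return (x, y)
```

Under this model, a gaze aimed straight at the camera (theta = phi = 0) resolves to the camera's own position on the screen.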
Generating Module 220:
[0037] The generating module 220 may generate the attentiveness of the user associated with the region of interest based on the location of the eye gaze computed by the computing module 218 and tracking of the eye gaze in the region of restriction. The generating module 220 may track the eye gaze movement in the region of restriction by computing a ratio. The ratio may indicate the movement of the eye gaze in the video with respect to a distance traversed by the eye gaze in the region of restriction. The tracking of the eye gaze in the region of restriction may be based on the ratio and the corresponding location of the eye gaze in the region of restriction. The attentiveness of the user indicates at least one of a distance traversed by the eye gaze in the region of interest and a time spent by the eye gaze in the region of interest.
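Ratio-based tracking of this kind can be illustrated as scaling the gaze centroid's displacement in the video by a calibration constant. The sketch below is a hedged simplification: the function name, the single scalar ratio and the calibrated `origin` point are assumptions, not the disclosed tracking scheme.

```python
def track_gaze(video_points, ratio, origin):
    """Map successive eye-gaze centroids observed in the video to
    locations in the region of restriction.

    video_points -- (x, y) gaze centroids, one per video frame (pixels).
    ratio        -- screen distance traversed per unit of gaze movement
                    in the video (a calibration constant derived from
                    the camera location relative to the display screen).
    origin       -- screen location corresponding to the first point.
    """
    x0, y0 = video_points[0]
    # Scale each frame's displacement from the starting centroid.
    return [(origin[0] + ratio * (x - x0),
             origin[1] + ratio * (y - y0))
            for (x, y) in video_points]
```

For instance, with a ratio of 5, a two-pixel horizontal shift of the gaze centroid in the video corresponds to ten units of movement in the region of restriction.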
Example:
[0038] To elaborate on determining attentiveness of the user with an example, consider a user accessing an online blog from a laptop. The system 102 receives the video from the inbuilt camera of the laptop. The video comprises the face of the user. The region of restriction in this case is the dimension of the display screen. The region of interest may be the area on the display screen that displays the blog content. The blog content may include, but is not limited to, written text, videos and advertisements. As the user reads through the blog, the set of eyes of the user may move in a particular way in the region of restriction. The system 102 may track the movement of the eye gaze of the user. After tracking the eye gaze of the user, the system 102 may be able to determine the attentiveness of the user with respect to the content of the blog. The attentiveness may be indicative of at least one of a distance traversed by the eye gaze of the user in the region of interest and a time the eye gaze of the user spent in the region of interest.
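The two measures named in the example, distance traversed by the eye gaze in the region of interest and time spent there, can be computed from a sequence of tracked gaze locations. The sketch below assumes an axis-aligned rectangular region of interest and a fixed sampling interval per frame; the function and parameter names are hypothetical.

```python
import math

def attentiveness(samples, roi, frame_dt):
    """Return (distance traversed, time spent) for the eye gaze inside
    a region of interest.

    samples  -- (x, y) gaze locations in the region of restriction,
                one per video frame.
    roi      -- region of interest as (x_min, y_min, x_max, y_max).
    frame_dt -- seconds elapsed between consecutive samples.
    """
    def inside(p):
        return roi[0] <= p[0] <= roi[2] and roi[1] <= p[1] <= roi[3]

    distance = 0.0
    for prev, cur in zip(samples, samples[1:]):
        if inside(prev) and inside(cur):
            # gaze moved while staying within the region of interest
            distance += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
    # each sample inside the region counts one frame interval
    time_spent = frame_dt * sum(1 for p in samples if inside(p))
    return distance, time_spent
```

A content provider could compare these values across regions of interest, e.g. blog text versus advertisement placements, to gauge which areas held the user's attention.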
[0039] The system 102 generates graphs of analytic data to indicate the attentiveness of the user with respect to the blog content. The attentiveness of the user may be useful for the content provider of the blog. The content provider may be able to understand the relevance of the content for the user. Further based on the attentiveness, the content provider may understand the placement of the advertisements in the region of restriction.
Figure 3 description:
[0040] Referring now to Figure 3, a method 300 for determining an attentiveness of a user is shown, in accordance with an embodiment of the present subject matter. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0041] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware,
software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 may be implemented as described in the system 102.
[0042] At block 302, a video from a camera, a region of restriction, a region of interest within the region of restriction and a location of the camera with respect to the display screen may be received. In one implementation, the video, the region of restriction, the region of interest and the location of the camera may be received by a receiving module 214 and linked to the system database 224.
[0043] At block 304, an eye gaze of a user may be identified based on the video and a primary image processing methodology. In one implementation, the eye gaze may be identified by an identifying module 216 and stored to system database 224.
[0044] At block 306, a location of the eye gaze in the region of restriction may be computed based on a secondary image processing methodology. In one implementation, the location of the eye gaze may be computed by a computing module 218 and tied to system database 224.
[0045] At block 308, the attentiveness of the user may be generated based on tracking of the eye gaze and a corresponding location of the eye gaze in the region of restriction. In one implementation, the attentiveness of the user may be generated by the generating module 220 and stored to the system database 224.
[0046] Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features.
[0047] Some embodiments enable a system and a method to detect the areas of ignorance in the region of restriction, in other words, to detect the areas in the region of restriction that are not traversed by the eye gaze.
[0048] Some embodiments enable a system and a method to detect the attentiveness of the user when the user is wearing spectacles.
[0049] Although implementations for methods and systems for determining an attentiveness of a user have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are
disclosed as examples of implementations for determining an attentiveness of a user using the system.
WE CLAIM:
1. A method for determining attentiveness of a user, the method comprising the steps of:
receiving, by a processor, a video from a camera, a region of restriction and a region of interest within the region of restriction, wherein the region of restriction is less than or equal to a dimension of the display screen;
identifying, by the processor, an eye gaze of a user based on the video and a primary image processing methodology;
computing, by the processor, a location of the eye gaze in the region of restriction using a secondary image processing methodology; and
generating, by the processor, an attentiveness of the user associated with the region of interest based on tracking of the eye gaze and the location of the eye gaze in the region of restriction wherein the attentiveness is indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent in the region of interest.
2. The method as claimed in claim 1, wherein the primary image processing methodology comprises:
identifying a pupil from a set of pupils in the video;
masking the pupil from the set of pupils to generate a masked pupil; and
computing a centroid point on the masked pupil, wherein the centroid point indicates the eye gaze of the user.
3. The method as claimed in claim 1, wherein the secondary image processing methodology comprises:
receiving a location of the camera with respect to the display screen;
computing a distance between the camera and the eye gaze of the user based on the video;
identifying a direction of the eye gaze with respect to the camera; and
computing the location of the eye gaze on the display screen based on the distance, the direction of the eye gaze and the location of the camera with respect to the display screen.
4. The method as claimed in claim 1, further comprising:
receiving a location of camera with respect to a display screen;
computing a ratio of a movement of the eye gaze in the video and a distance traversed by the eye gaze in the region of restriction based on the location of the camera with respect to the display screen and the location of the eye gaze on the display screen; and
tracking the eye gaze in the region of restriction based on the ratio and the location of the eye gaze in the region of restriction.
5. A system (102) for determining attentiveness of a user, the system comprising:
a receiving module (214), for receiving, a video from a camera, a region of restriction and a region of interest within the region of restriction, wherein the region of restriction is less than or equal to a dimension of the display screen;
an identifying module (216), for identifying, an eye gaze of a user based on the video and a primary image processing methodology;
a computing module (218), for computing, a location of the eye gaze in the region of restriction using a secondary image processing methodology; and
a generating module (220), for generating, an attentiveness of the user associated with the region of interest based on tracking of the eye gaze and the location of the eye gaze in the region of restriction, wherein the attentiveness is indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent in the region of interest.
6. The system (102) as claimed in claim 5, wherein the primary image processing methodology comprises:
identifying a pupil from a set of pupils in the video;
masking the pupil from the set of pupils to generate a masked pupil; and
computing a centroid point on the masked pupil, wherein the centroid point indicates the eye gaze of the user.
7. The system (102) as claimed in claim 5, wherein the secondary image processing methodology comprises:
receiving a location of the camera with respect to the display screen;
computing a distance between the camera and the eye gaze of the user based on the video;
identifying a direction of the eye gaze with respect to the camera; and
computing the location of the eye gaze on the display screen based on the distance, the direction of the eye gaze and the location of the camera with respect to the display screen.
8. The system (102) as claimed in claim 5, further comprising:
receiving a location of the camera with respect to the display screen;
computing a ratio of a movement of the eye gaze in the video and a distance traversed by the eye gaze in the region of restriction based on the location of the camera with respect to the display screen and the location of the eye gaze on the display screen; and
tracking the eye gaze in the region of restriction based on the ratio and the location of the eye gaze in the region of restriction.
9. The system (102) as claimed in claim 4, wherein the tracking of the eye gaze further comprises:
computing a ratio of a movement of the eye gaze in the video and a distance traversed by the eye gaze in the region of restriction based on the location of the camera with respect to the display screen and the corresponding location of the eye gaze on the display screen; and
tracking the eye gaze in the region of restriction based on the ratio and the corresponding location of the eye gaze in the region of restriction.
10. A non-transitory computer readable medium embodying a program executable in a computing device for determining attentiveness of a user, the program comprising a program code for:
receiving, a video from a camera, a region of restriction and a region of interest within the region of restriction, wherein the region of restriction is less than or equal to a dimension of the display screen;
identifying, an eye gaze of a user based on the video and a primary image processing methodology;
computing a location of the eye gaze in the region of restriction using a secondary image processing methodology; and
generating, an attentiveness of the user associated with the region of interest based on tracking of the eye gaze and a corresponding location of the eye gaze in the region of restriction, wherein the attentiveness is indicative of at least one of a distance traversed by the eye gaze in the region of interest and a time spent in the region of interest.
| # | Name | Date |
|---|---|---|
| 1 | 201911004373-STATEMENT OF UNDERTAKING (FORM 3) [04-02-2019(online)].pdf | 2019-02-04 |
| 2 | 201911004373-REQUEST FOR EXAMINATION (FORM-18) [04-02-2019(online)].pdf | 2019-02-04 |
| 3 | 201911004373-REQUEST FOR EARLY PUBLICATION(FORM-9) [04-02-2019(online)].pdf | 2019-02-04 |
| 4 | 201911004373-POWER OF AUTHORITY [04-02-2019(online)].pdf | 2019-02-04 |
| 5 | 201911004373-FORM-9 [04-02-2019(online)].pdf | 2019-02-04 |
| 6 | 201911004373-FORM 18 [04-02-2019(online)].pdf | 2019-02-04 |
| 7 | 201911004373-FORM 1 [04-02-2019(online)].pdf | 2019-02-04 |
| 8 | 201911004373-FIGURE OF ABSTRACT [04-02-2019(online)].jpg | 2019-02-04 |
| 9 | 201911004373-DRAWINGS [04-02-2019(online)].pdf | 2019-02-04 |
| 10 | 201911004373-COMPLETE SPECIFICATION [04-02-2019(online)].pdf | 2019-02-04 |
| 11 | abstract.jpg | 2019-03-12 |
| 12 | 201911004373-Proof of Right (MANDATORY) [26-03-2019(online)].pdf | 2019-03-26 |
| 13 | 201911004373-OTHERS-030419.pdf | 2019-04-09 |
| 14 | 201911004373-Correspondence-030419.pdf | 2019-04-09 |
| 15 | 201911004373-FER.pdf | 2020-08-18 |
| 16 | 201911004373-OTHERS [18-02-2021(online)].pdf | 2021-02-18 |
| 17 | 201911004373-FER_SER_REPLY [18-02-2021(online)].pdf | 2021-02-18 |
| 18 | 201911004373-COMPLETE SPECIFICATION [18-02-2021(online)].pdf | 2021-02-18 |
| 19 | 201911004373-CLAIMS [18-02-2021(online)].pdf | 2021-02-18 |
| 20 | 201911004373-POA [09-07-2021(online)].pdf | 2021-07-09 |
| 21 | 201911004373-FORM 13 [09-07-2021(online)].pdf | 2021-07-09 |
| 22 | 201911004373-Proof of Right [13-10-2021(online)].pdf | 2021-10-13 |
| 23 | 201911004373-US(14)-HearingNotice-(HearingDate-27-09-2023).pdf | 2023-08-29 |
| 24 | 201911004373-Correspondence to notify the Controller [08-09-2023(online)].pdf | 2023-09-08 |
| 25 | 201911004373-Written submissions and relevant documents [10-10-2023(online)].pdf | 2023-10-10 |
| 26 | 201911004373-PatentCertificate20-11-2023.pdf | 2023-11-20 |
| 27 | 201911004373-IntimationOfGrant20-11-2023.pdf | 2023-11-20 |

| # | Name |
|---|---|
| 1 | STRATEGYE_03-06-2020.pdf |
| 2 | STRATEGYAE_17-03-2021.pdf |