Abstract: Disclosed is a method and a system for evaluating two onscreen keyboards by determining cognitive scores associated with each of the two onscreen keyboards. The method comprises receiving a first set of parameters and a second set of parameters associated with a first onscreen keyboard and a second onscreen keyboard, respectively. The method further comprises determining a first cognitive score for the first onscreen keyboard using the first set of parameters. The method further comprises determining a second cognitive score for the second onscreen keyboard using the second set of parameters. The method further comprises validating the first cognitive score and the second cognitive score using an Electroencephalography (EEG) signal of the user. The EEG signal of the user is captured while the user is using the first onscreen keyboard and the second onscreen keyboard.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention: EVALUATING ONSCREEN KEYBOARD
Applicant
Tata Consultancy Services Limited A company Incorporated in India under The Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021.
Maharashtra. India
The following specification particularly describes the invention and the manner in which it is to be performed
TECHNICAL FIELD
[001] The present subject matter described herein, in general, relates to onscreen
keyboards, and more particularly to a system and a method for evaluating onscreen keyboards displayed on display screens, such as a television.
BACKGROUND
[002] In a modern era of smart display devices such as, smart televisions capable of
being coupled to a modem or other electronic devices for Internet access and other activities, people tend to use the smart display devices for a variety of purposes such as surfing the Internet and playing games. In order to surf the Internet, play games, or perform other activities on the smart display devices, one needs an onscreen keyboard to be displayed on the smart display devices. In other words, with these new evolving functionalities there is an increased need to enable users to input text through onscreen keyboards and remote control devices.
[003] As may be understood, a variety of onscreen keyboards with varied arrangements
of alphabets, numbers, and characters are available. However, only an onscreen keyboard that is comfortable and user friendly should be presented to the users. Therefore, there is a need to evaluate onscreen keyboards before presenting them to the users, as existing onscreen layouts are not comfortable to use on a TV.
SUMMARY
[004] This summary is provided to introduce concepts related to systems and methods
for evaluating onscreen keyboards and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[005] In one implementation, a method for evaluating two onscreen keyboards by
determining cognitive scores associated with each of the two onscreen keyboards is disclosed. The method comprises receiving a first set of parameters from a user using a first onscreen keyboard. The first set of parameters is indicative of a usability of the first onscreen keyboard. The method further comprises receiving a second set of parameters from the user using a second onscreen keyboard. The second set of parameters is indicative of a usability of the second onscreen
keyboard. The method further comprises determining a first cognitive score for the first onscreen
keyboard using the first set of parameters. The method further comprises determining a second
cognitive score for the second onscreen keyboard using the second set of parameters. The method
further comprises validating the first cognitive score and the second cognitive score using an
Electroencephalography (EEG) signal of the user. The EEG signal of the user is captured while
the user is using the first onscreen keyboard and the second onscreen keyboard.
[006] In another implementation, a system for evaluating two onscreen keyboards by
determining cognitive scores associated with each of the two onscreen keyboards is disclosed. The system comprises a processor and a memory. The memory is coupled to the processor. The processor is capable of executing a plurality of modules stored in the memory. The plurality of modules comprises a receiving module, a calculation module, and a validation module. The receiving module is configured to receive a first set of parameters from a user using a first onscreen keyboard. The first set of parameters is indicative of a usability of the first onscreen keyboard. The receiving module is further configured to receive a second set of parameters from the user using a second onscreen keyboard. The second set of parameters is indicative of a usability of the second onscreen keyboard. The calculation module is configured to determine a first cognitive score for the first onscreen keyboard using the first set of parameters. The calculation module is further configured to determine a second cognitive score for the second onscreen keyboard using the second set of parameters. The validation module is configured to validate the first cognitive score and the second cognitive score using an Electroencephalography (EEG) signal of the user. The EEG signal of the user is captured while the user is using the first onscreen keyboard and the second onscreen keyboard.
[007] In yet another implementation, a non-transitory computer readable medium
containing a computer program product for evaluating two onscreen keyboards by determining cognitive scores associated with each of the two onscreen keyboards is disclosed. The non-transitory computer readable medium comprises: a) a program code for receiving a first set of parameters from a user using a first onscreen keyboard, wherein the first set of parameters is indicative of a usability of the first onscreen keyboard; b) a program code for receiving a second set of parameters from the user using a second onscreen keyboard, wherein the second set of parameters is indicative of a usability of the second onscreen keyboard; c) a program code for determining a first cognitive score for the first onscreen keyboard using the first set of
parameters; d) a program code for determining a second cognitive score for the second onscreen keyboard using the second set of parameters; and e) a program code for validating the first cognitive score and the second cognitive score using an Electroencephalography (EEG) signal of the user, wherein the EEG signal of the user is captured while the user is using the first onscreen keyboard and the second onscreen keyboard.
BRIEF DESCRIPTION OF THE DRAWINGS
[008] The detailed description is described with reference to the accompanying figures.
In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[009] Figure 1 illustrates a network implementation of a system for evaluating two
onscreen keyboards to determine cognitive scores associated with each of the two onscreen
keyboards, in accordance with an embodiment of the present subject matter.
[0010] Figure 2 illustrates the system of Figure 1, in accordance with an embodiment of
the present subject matter.
[0011] Figure 3 shows a method for validating a dynamic mental operator using an EEG
signal, in accordance with an embodiment of the present subject matter.
[0012] Figures 4A, 4B, and 4C show graphs associated with validation of the dynamic
mental operator, in accordance with an embodiment of the present subject matter.
[0013] Figure 5 shows a flowchart illustrating a method for evaluating an onscreen
keyboard, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0014] The present subject matter relates generally to evaluation of onscreen keyboards
by determining cognitive scores associated with each of the two onscreen keyboards. The onscreen keyboards may be displayed on a display screen such as a television, a computer, a point of sale screen, and a tablet computer. Evaluating an onscreen keyboard means determining whether a layout of the onscreen keyboard is comfortable enough for users or not. The onscreen keyboards may be evaluated by determining the cognitive scores associated with them. The cognitive scores may be determined based upon a set of parameters associated with the usability of the onscreen keyboards. The set of parameters may be associated with a Keystroke Level Model - Goals, Operators, Methods, and Selection (KLM-GOMS) model. The KLM-GOMS model assists in determining theoretical values of the cognitive scores.
[0015] After the theoretical values of the cognitive scores are determined using the KLM-GOMS model, an EEG signal of the user may be used to validate the cognitive scores so determined. It may be understood that the onscreen keyboards may be evaluated in two scenarios. In a first scenario, the onscreen keyboards to be evaluated will not be assisted with predictive text entry, whereas in a second scenario, the onscreen keyboards to be evaluated will be assisted with predictive text entry.
[0016] In the first scenario, the cognitive scores associated with the onscreen keyboards
are determined using the KLM-GOMS model and are validated using the EEG signal. After the cognitive scores are validated, it may be ascertained whether to use the onscreen keyboard or not. The higher the cognitive score, the higher the cognitive load, and the more uncomfortable the onscreen keyboard will be to use. Therefore, after determining cognitive scores for several onscreen keyboards, one may choose to use the onscreen keyboard with the least cognitive score.
[0017] In the second scenario, the cognitive scores associated with the onscreen
keyboards are determined using the KLM-GOMS model and are validated using the EEG signal. After the cognitive scores are validated, it may be ascertained whether to use the onscreen keyboard or not. The higher the cognitive score, the higher the cognitive load, and the more uncomfortable the onscreen keyboard will be to use. Therefore, after determining cognitive scores for several onscreen keyboards, one may choose to use the onscreen keyboard with the least cognitive score. Having said that, in the second scenario, a new parameter called a dynamic mental operator may also be determined for assisting in the evaluation of the onscreen keyboard. The dynamic mental operator is indicative of a cognitive load on the user while the user is assisted with predictive text entry on the onscreen keyboard. In other words, the dynamic mental operator indicates an amount of cognitive load that is directly proportional to the discomfort of the user with the onscreen keyboard.
[0018] Therefore, it may be understood that the several onscreen keyboard layouts may
be evaluated by using the method and system proposed in the present subject matter. For example, onscreen keyboard layouts having several arrangements of alphabets, characters, and numerals may be evaluated for determining a comfort of the users for one or more of those
onscreen keyboard layouts. Based upon the comfort of a user for any particular onscreen keyboard, such a keyboard may be chosen to be displayed on a display screen of the user for his use.
[0019] While aspects of described system and method for evaluating two onscreen
keyboards by determining cognitive scores associated with each of the two onscreen keyboards
may be implemented in any number of different computing systems, environments, and/or
configurations, the embodiments are described in the context of the following exemplary system.
[0020] Referring now to Figure 1, a network implementation 100 of a system 102 for
evaluating two onscreen keyboards by determining cognitive scores associated with each of the
two onscreen keyboards is illustrated, in accordance with an embodiment of the present subject
matter. In one embodiment, the system 102 receives a first set of parameters from a user using a
first onscreen keyboard; and a second set of parameters from the user using a second onscreen
keyboard. The first set of parameters is indicative of a usability of the first onscreen keyboard,
whereas the second set of parameters is indicative of a usability of the second onscreen
keyboard. Based upon the first set of parameters, the system 102 may determine a first cognitive
score for the first onscreen keyboard. Subsequently, the system 102 may determine a second
cognitive score for the second onscreen keyboard. After the cognitive scores are determined,
the system 102 may validate the first cognitive score and the second cognitive score using an
Electroencephalography (EEG) signal of the user. The EEG signal of the user is captured while
the user is using the first onscreen keyboard and the second onscreen keyboard.
[0021] It may be understood that after the cognitive scores are validated, the user may
take an informed decision whether to use the first onscreen keyboard or the second onscreen keyboard.
[0022] Although the present subject matter is explained considering that the system 102
is implemented as a server, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2... 104-N, collectively referred to as user devices 104 hereinafter, or applications residing on the user devices 104. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, an EEG
system, a workstation, and a remote control such as a television remote. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[0023] In one implementation, the network 106 may be a wireless network, a wired
network or a combination thereof. The network 106 can be implemented as one of the different
types of networks, such as intranet, local area network (LAN), wide area network (WAN), the
internet, and the like. The network 106 may either be a dedicated network or a shared network.
The shared network represents an association of the different types of networks that use a variety
of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control
Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to
communicate with one another. Further, the network 106 may include a variety of network
devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0024] Referring now to Figure 2, the system 102 is illustrated in accordance with an
embodiment of the present subject matter. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0025] The I/O interface 204 may include a variety of software and hardware interfaces,
for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with a user directly or through the client devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0026] The memory 206 may include any computer-readable medium known in the art
including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only
memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and
magnetic tapes. The memory 206 may include modules 208 and data 210.
[0027] The modules 208 include routines, programs, objects, components, data
structures, etc., which perform particular tasks or implement particular abstract data types. In one
implementation, the modules 208 may include a receiving module 212, a calculation module
214, a validation module 216, and other modules 218. The other modules 218 may include
programs or coded instructions that supplement applications and functions of the system 102.
[0028] The data 210, amongst other things, serves as a repository for storing data
processed, received, and generated by one or more of the modules 208. The data 210 may also include other data 220. The other data 220 may include data generated as a result of the execution of one or more modules in the other modules 218.
[0029] In one implementation, at first, a user may use the user device 104 to access the
system 102 via the I/O interface 204. The user may register themselves using the I/O interface 204 in order to use the system 102. The system 102 may be used for evaluating onscreen keyboards. The onscreen keyboards are the keyboards that may be displayed on a display screen such as, a television, a computer, a point of sale screen, and a tablet computer. The onscreen keyboards may be accessed using wireless remotes.
[0030] Evaluating an onscreen keyboard means determining whether a layout of the
onscreen keyboard is comfortable enough for users or not. The layout of the onscreen keyboard
may be understood as an arrangement of alphabets, characters, numbers, and the like in form of a
keyboard displayed on a display screen such as a television. For example, a QWERTY keyboard
may be understood as a layout of an onscreen keyboard. Similarly, English alphabets arranged in
a sequence order may also be understood as a layout of an onscreen keyboard.
[0031] In the subsequent description, evaluation of two onscreen keyboards may be
explained in detail. However, it may be understood that the underlying concept to evaluate an onscreen keyboard may be extended to any number of onscreen keyboards.
The Receiving Module 212
[0032] In order to evaluate two onscreen keyboards, at first, the user may capture a first
set of parameters associated with a first onscreen keyboard; and a second set of parameters associated with a second onscreen keyboard. The first set of parameters and the second set of
parameters are indicative of a usability of the first onscreen keyboard and the second onscreen keyboard, respectively. In one example, the first set of parameters and the second set of parameters may be modified versions of original parameters used in a Keystroke Level Model - Goals, Operators, Methods, and Selection (KLM-GOMS) model. Specifically, a pointing parameter P of the KLM-GOMS model may be redefined. According to an embodiment of the present subject matter, the pointing parameter P may be redefined to indicate a sum of a time required to locate a key on the onscreen keyboard and a time required to move a focus on the key on the onscreen keyboard using a wireless remote. Therefore, it may be understood that each of the first set of parameters and the second set of parameters may include all parameters of the KLM-GOMS model except that the pointing parameter P is redefined. Table 1 shows all the parameters present in the first set of parameters and the second set of parameters.
[0033] In one example, in order to estimate the values of the pointing parameter P for
first onscreen keyboard and the second onscreen keyboard, a user study was conducted on a group of 20 people. During the user study, a tape recorded message consisting of 25 randomly selected alphabets was played. The users were instructed to focus on a particular block containing alphabets. The time taken to finish such an exercise was noted using a stop watch. To reduce error as much as possible, an average value was taken for each user. The pointing parameter P for a NON-QWERTY keyboard layout (the first onscreen keyboard) was found to be 1.77 seconds and the pointing parameter for a QWERTY keyboard layout (the second onscreen keyboard) was found to be 2.1 seconds. It may be understood that the users may be a mixed batch of computer users and non-computer users.
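The averaging performed in this user study can be sketched as follows. This is a minimal illustration; the function name and the trial timings are hypothetical, not data from the study itself:

```python
# Sketch: estimating the pointing parameter P from timed trials, as in the
# user study above. Each user's repeated per-key timings are first averaged
# per user (to reduce error), and the per-user averages are then averaged
# across the group. The trial values below are made up for illustration.

def pointing_parameter(per_user_trials):
    """per_user_trials: one list of per-key times (seconds) per user.
    Returns the group-average pointing time P."""
    user_means = [sum(trials) / len(trials) for trials in per_user_trials]
    return sum(user_means) / len(user_means)

# Two hypothetical users, two timed trials each:
p_estimate = pointing_parameter([[1.8, 1.7], [1.75, 1.85]])
```

In the study described above, this procedure would be applied to the stopwatch timings of all 20 participants for each keyboard layout.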
[0034] The first set of parameters and the second set of parameters that may be used to
ascertain the usability of the first onscreen keyboard and second onscreen keyboard are shown in Table 1 below:
Table 1: The parameters present in the first set of parameters and the second set of parameters:
Parameter | Description | Time (sec) for the first onscreen keyboard | Time (sec) for the second onscreen keyboard
P | Time required to find a key and move focus on that key | 1.77 | 2.1
K | Key or button press | 0.20 | 0.20
H | Move from mouse to keyboard and back | 0.4 | 0.4
R(t) | Waiting time for device to respond | t | t
M | Mental preparation and thinking time | 1.35 | 1.35
F | Finger movement | 0.22 | 0.22
[0035] The first set of parameters and the second set of parameters, after being captured
by the user, may be sent to the system 102. The receiving module 212 of the system 102 may receive the first set of parameters and the second set of parameters.
The Calculation Module 214
[0036] Based upon the first set of parameters, the calculation module 214 may determine
a first cognitive score for the first onscreen keyboard. Similarly, based upon the second set of parameters, the calculation module 214 may determine a second cognitive score for the second onscreen keyboard. Specifically, the calculation module 214 may add up time, shown in Column 3 of Table 1, corresponding to each parameter of the first set of parameters to determine the first cognitive score. Similarly, the calculation module 214 may add up time, shown in Column 4 of Table 1, corresponding to each parameter of the second set of parameters to determine the second cognitive score.
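The summation performed by the calculation module 214 can be sketched as follows. The dictionaries and function names are illustrative; the per-parameter values are taken from Table 1, and the response-time term R(t) is omitted because it contributes the same t to both keyboards:

```python
# Sketch: computing cognitive scores by summing per-parameter times from
# Table 1, as the calculation module 214 does. R(t) is omitted since the
# device response time t is identical for both keyboards.

FIRST_KEYBOARD = {"P": 1.77, "K": 0.20, "H": 0.4, "M": 1.35, "F": 0.22}
SECOND_KEYBOARD = {"P": 2.1, "K": 0.20, "H": 0.4, "M": 1.35, "F": 0.22}

def cognitive_score(params):
    """Sum the times (seconds) of all parameters in the set."""
    return sum(params.values())

first_score = cognitive_score(FIRST_KEYBOARD)
second_score = cognitive_score(SECOND_KEYBOARD)

# A lower cognitive score indicates a lower cognitive load, hence a more
# comfortable keyboard.
better = "first" if first_score < second_score else "second"
```

Since the two parameter sets differ only in the pointing parameter P (1.77 s vs. 2.1 s), the first onscreen keyboard ends up with the lower score under this sketch.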
[0037] Based upon the first cognitive score, the calculation module 214 may determine a
first cognitive load. Similarly, based upon the second cognitive score, the calculation module 214 may determine a second cognitive load. The term cognitive load is used in cognitive psychology to illustrate the load related to executive control of working memory. Theories contend that during complex learning activities, the amount of information and interactions that must be processed simultaneously can either underload or overload the finite amount of working memory one possesses.
The Validation Module 216
[0038] After determining the first cognitive score and the second cognitive score,
these scores may be validated by the validation module 216. The validation module 216 uses the EEG signal of the user, captured while the user is using the first onscreen keyboard and the second onscreen keyboard, to validate the first cognitive score and the second cognitive score. Specifically, the validation module 216 determines a first validation score associated with the first onscreen keyboard by capturing the EEG signal while the user is using the first onscreen keyboard. Subsequently, the validation module 216 compares the first validation score with the first cognitive score to determine whether the first cognitive score is the same as the first validation score, thereby validating the first cognitive score.
[0039] Similarly, the validation module 216 determines a second validation score
associated with the second onscreen keyboard by capturing the EEG signal while the user is using the second onscreen keyboard without predictive text entry. Subsequently, the validation module 216 compares the second validation score with the second cognitive score to determine whether the second cognitive score is the same as the second validation score, thereby validating the second cognitive score.
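The comparison step can be sketched as follows. Since both scores are measured or estimated quantities, an exact-equality check would be brittle, so this sketch uses a tolerance; the function name and the tolerance value are illustrative assumptions, not from the text:

```python
# Sketch: validating a cognitive score against its EEG-derived validation
# score, as the validation module 216 does. The tolerance (in seconds) is
# an illustrative assumption.

def is_validated(cognitive_score, validation_score, tol=0.05):
    """True when the theoretical score and the EEG-derived score agree
    within `tol` seconds."""
    return abs(cognitive_score - validation_score) <= tol

close_scores = is_validated(3.94, 3.96)    # agree within tolerance
far_scores = is_validated(3.94, 4.27)      # disagree beyond tolerance
```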
[0040] More details on determination of the first validation score and the second
validation score may be provided in the explanation of Figure 3.
Determining Dynamic Mental Operator
[0041] It may be understood that both the onscreen keyboards may be evaluated under
two scenarios. In a first scenario, both of the first onscreen keyboard and the second onscreen keyboard may not be assisted with predictive text entry. The user being unassisted with
predictive text entry may mean that when the user wishes to enter a word, no suggestions on that word may be provided to the user for selection as and when the user types using a wireless remote. However, in a second scenario, both of the first onscreen keyboard and the second onscreen keyboard may be assisted with predictive text entry. The user being assisted with predictive text entry may mean that when the user wishes to enter a word, one or more suggestions on that word may be provided to the user for selection as and when the user types using a wireless remote.
[0042] In the first scenario, the cognitive scores associated with the onscreen keyboards
are determined using the values shown in Table 1. Subsequently, the cognitive scores are validated using the EEG signal as explained above. After the cognitive scores are validated, it may be ascertained whether to use the onscreen keyboard or not. The higher the cognitive score, the higher the cognitive load, and the more uncomfortable the onscreen keyboard will be to use. Therefore, after determining cognitive scores for several onscreen keyboards, one may choose to use the onscreen keyboard with the least cognitive score.
[0043] However, in the second scenario, the cognitive scores associated with the
onscreen keyboards are determined using the values shown in Table 1. Subsequently, the cognitive scores are validated using the EEG signal as explained above. Further, in the second scenario, a new parameter called a dynamic mental operator may also be determined for assisting in the evaluation of the onscreen keyboards. The dynamic mental operator is indicative of an additional cognitive load on the user while the user is assisted with predictive text entry on the onscreen keyboard. In other words, the dynamic mental operator indicates an amount of additional cognitive load that is directly proportional to the discomfort of the user with the onscreen keyboard.
[0044] The dynamic mental operator for the first onscreen keyboard and the second
onscreen keyboard may be determined by the calculation module 214. In one embodiment, the calculation module 214 may use the KLM-GOMS model to determine the dynamic mental operator for each of the first onscreen keyboard and the second onscreen keyboard. In this embodiment, in order to determine the dynamic mental operator, each of the first set of parameters and the second set of parameters may be arranged one by one in equations similar to equations of the KLM-GOMS model.
[0045] The KLM-GOMS equation for traditional text entry of a given phrase for the first
onscreen keyboard may be written as:
T = Th + w(kt.Tk + d.Tm)    (1)
where,
Th = homing time
Tk = time for button press
kt = average number of key presses per word
w = number of words typed
For predictive text entry, a prediction algorithm reduces the total number of keystrokes. The prediction algorithm in the present embodiment may reduce the number of keystrokes by a factor of 0.399. Hence, the effective number of key presses is
Keff = w.kt x 0.399    (2)
[0046] Further, as mentioned above, for predictive text entry the dynamic mental
operator comes into play. The dynamic mental operator corresponds to the additional cognitive load on the user for reading and selecting suggestions provided. The suggestions may change with each keystroke.
Thus, equation (1) becomes,
T = Th + Keff.Tk + w.d.Tm + Keff.Tdm    (3)
where, Tdm = dynamic mental operator
= time for reading suggestions (dynamic mental (DM) operator) + time for one key press to select the word
= Tdm(op) + Tk
Putting the value of Tdm in (3), we get,
T = Th + Keff.Tk + w.d.Tm + Keff.Tdm(op) + w.Tk    (4)
[0047] For the present scenario, one can neglect Th as it is of no use. Further, Tk is
modified to take into account the search time of a key to be pressed on the onscreen keyboard, and Ts = time for finding any key and moving focus needs to be considered. Thus, equation (4) becomes,
T = Keff.(Tk + Ts) + w.d.Tm + Keff.Tdm(op) + w.(Tk + Ts)    (5)
[0048] In order to calculate the dynamic mental operator, six phrase sets may be selected
randomly. Users may be given an initial familiarization phrase and then may be asked to enter six phrases at one go using the predictive onscreen keyboard. The time taken by each user and the number of keystrokes required to type the phrases may be recorded. The dynamic mental operator may be calculated by the calculation module 214 using equation (5). From Table 1, one gets:
Ts = time for finding any key and moving focus = 1.77 s
Tk = time taken for button press = 0.20 s
Tm = mental preparation and thinking time = 1.35 s
The values of w, Keff, and the total time taken for typing are different for different users and are given in Table 2 below.
[0049] The dynamic mental operator for different users using the first onscreen keyboard
is given in table 2 below. The average value of Tdm(op) was found to be 0.063.
Table 2: Dynamic mental operator for different users
User | Total time taken to type (sec) | Effective key-presses (Keff) | No. of words typed (w) | Tdm(op) (sec)
1 | 228.62 | 65.43 | 30 | 0.0935
2 | 218.66 | 61.84 | 29 | 0.102
3 | 237.40 | 72.61 | 30 | 0.010
4 | 224.28 | 65.43 | 30 | 0.027
5 | 200.26 | 59.85 | 25 | 0.074
6 | 206.14 | 61.84 | 26 | 0.0515
7 | 216.98 | 63.84 | 28 | 0.064
8 | 224.5 | 65.43 | 30 | 0.035
9 | 212.52 | 63.04 | 27 | 0.0648
10 | 215.88 | 63.84 | 28 | 0.049
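Equation (5) can be rearranged to solve for the dynamic mental operator Tdm(op). The sketch below does so under loudly labelled assumptions: the factor d is not given numerically in the text, so d = 1 is used purely for illustration, and the consistency check uses synthetic numbers rather than the Table 2 measurements:

```python
# Sketch: solving equation (5) for the dynamic mental operator Tdm(op):
#   T = Keff*(Tk + Ts) + w*d*Tm + Keff*Tdm(op) + w*(Tk + Ts)
# Ts, Tk, and Tm come from Table 1; d = 1 is an illustrative assumption,
# as the text does not give d a numeric value.

TS = 1.77   # time for finding a key and moving focus (s)
TK = 0.20   # time for a button press (s)
TM = 1.35   # mental preparation and thinking time (s)

def dynamic_mental_operator(total_time, k_eff, words, d=1.0):
    """Rearrange equation (5) to isolate Tdm(op)."""
    residual = (total_time
                - k_eff * (TK + TS)
                - words * d * TM
                - words * (TK + TS))
    return residual / k_eff

# Consistency check with synthetic numbers: if Tdm(op) = 0.1 s, Keff = 50,
# and w = 25, equation (5) gives T = 186.5 s; solving recovers 0.1 s.
recovered = dynamic_mental_operator(186.5, 50, 25)
```

Reproducing the exact Tdm(op) values of Table 2 would additionally require the measurement details (and the value of d) used in the study, which are not fully stated in the text.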
[0050] Similarly, the dynamic mental operator may be calculated for users using the
second onscreen keyboard. It may be understood that the dynamic mental operator indicates an additional amount of cognitive load that is directly proportional to the discomfort of the user with the onscreen keyboard. In other words, the higher the value of the dynamic mental operator, the higher the cognitive load, and the more uncomfortable the user will be with a particular onscreen keyboard. Therefore, it may be understood that the dynamic mental operator may facilitate further evaluation of the onscreen keyboards. Based upon the dynamic mental operator, a layout of an onscreen keyboard may be decided.
Validation using EEG signal
[0051] As mentioned above, in order to validate the first cognitive score, the second
cognitive score, and the dynamic mental operator (of each of the first onscreen keyboard and of the second onscreen keyboard), the validation module 216 may determine the first validation score, the second validation score, and a validation operator. A method performed by the validation module 216 is shown in Figure 3. The EEG signal of the user may be captured while the user is using the first onscreen keyboard. After receiving the EEG signal, the validation module 216 may apply a Common Spatial Pattern (CSP) filter to the EEG signal to obtain a filtered EEG signal. Thereafter, the validation module 216 may extract certain EEG features from the filtered EEG signal using a sliding window approach known in the art. In one example, the EEG features may comprise log variance, HJORTH parameters, frequency band powers, and spectral distributions (see graphs shown in Figures 4A, 4B, and 4C). At a further step, the
validation module 216 may classify the EEG features using a linear Support Vector Machine (SVM). Thereafter, the validation module 216 may determine the first validation score for the first onscreen keyboard.
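The feature-extraction and classification steps above can be sketched in simplified form. This is a minimal illustration, not the disclosed implementation: the CSP filtering step and the frequency-band features are omitted, a nearest-centroid rule stands in for the linear SVM, and the signals are synthetic random data rather than recorded EEG:

```python
import math
import random
from statistics import mean, pvariance

def hjorth(x):
    """Hjorth parameters of a signal window: activity, mobility, complexity."""
    dx = [b - a for a, b in zip(x, x[1:])]
    ddx = [b - a for a, b in zip(dx, dx[1:])]
    activity = pvariance(x)
    mobility = math.sqrt(pvariance(dx) / activity)
    complexity = math.sqrt(pvariance(ddx) / pvariance(dx)) / mobility
    return activity, mobility, complexity

def window_features(sig, win=128, step=64):
    """Sliding-window features: log variance plus Hjorth mobility and complexity."""
    feats = []
    for s in range(0, len(sig) - win + 1, step):
        w = sig[s:s + win]
        _, mob, comp = hjorth(w)
        feats.append((math.log(pvariance(w)), mob, comp))
    return feats

def centroid(vecs):
    return tuple(mean(col) for col in zip(*vecs))

def nearest_centroid(vec, c0, c1):
    """Stand-in linear classifier: assign to the closer class centroid."""
    d0 = sum((a - b) ** 2 for a, b in zip(vec, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(vec, c1))
    return 0 if d0 <= d1 else 1

random.seed(0)
# Hypothetical stand-ins for the filtered EEG under two typing conditions
calm = [random.gauss(0, 1) for _ in range(2048)]
loaded = [random.gauss(0, 3) for _ in range(2048)]
f_calm, f_loaded = window_features(calm), window_features(loaded)
c_calm, c_loaded = centroid(f_calm), centroid(f_loaded)
labels = [0] * len(f_calm) + [1] * len(f_loaded)
preds = [nearest_centroid(v, c_calm, c_loaded) for v in f_calm + f_loaded]
accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
```

In this toy setting the two conditions differ mainly in signal variance, so the log-variance feature separates the classes almost perfectly.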
[0052] Similarly, the validation module 216 may determine a second validation score for
the second onscreen keyboard. The difference between the first validation score and the second validation score is equal to the validation operator. The validation operator may be compared with the dynamic mental operator, calculated using the KLM-GOMS equations, to determine whether the validation operator is equal to the dynamic mental operator, thereby validating the dynamic mental operator.
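The comparison just described can be sketched as follows. The tolerance tol and the numeric scores below are assumptions for illustration only; the specification itself speaks of the validation operator being equal to the dynamic mental operator:

```python
def validation_operator(first_val_score, second_val_score):
    # Difference between the two EEG-derived validation scores
    return first_val_score - second_val_score

def validates(val_op, dynamic_mental_op, tol=0.01):
    # Treat the operators as matching when they agree within an assumed tolerance,
    # since measured scores rarely match exactly
    return abs(val_op - dynamic_mental_op) <= tol

# Hypothetical scores, for illustration only
v = validation_operator(0.095, 0.038)
ok = validates(v, 0.057)
```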
[0053] It may be understood that the EEG signal of the user may be captured in
both the first scenario and in the second scenario. In the first scenario, the EEG signal may be
captured when both the first onscreen keyboard and the second onscreen keyboard are without
predictive text entry. In the second scenario, the EEG signal may be captured when both the first
onscreen keyboard and the second onscreen keyboard are assisted with predictive text entry.
[0054] Further, it may be understood that the first cognitive score, the second cognitive
score, and the validation operator may also be validated using a Stroop effect test, such as a psychometric test known in the art. Based upon the above explanation, it may be understood that several onscreen keyboard layouts may be evaluated using the method and system 102 proposed in the present subject matter. For example, onscreen keyboard layouts having several arrangements of alphabets, characters, and numerals may be evaluated for determining the comfort of users with one or more of those onscreen keyboard layouts. Based upon the comfort of a user with any particular onscreen keyboard, such a keyboard may be displayed on a display screen for the user's use.
[0055] Referring now to Figure 5, a method 500 for evaluating onscreen keyboards is
shown, in accordance with an embodiment of the present subject matter. The method 500 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 500 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable
instructions may be located in both local and remote computer storage media, including memory storage devices.
[0056] The order in which the method 500 is described is not intended to be construed as
a limitation, and any number of the described method blocks can be combined in any order to implement the method 500 or alternate methods. Additionally, individual blocks may be deleted from the method 500 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or a combination thereof. However, for ease of explanation, in the embodiments described below, the method 500 may be considered to be implemented in the above-described system 102.
[0057] At block 502, a first set of parameters may be received from a user using a first
onscreen keyboard. The first set of parameters is indicative of a usability of the first onscreen keyboard. The user uses the first onscreen keyboard via a wireless remote. In one example, the first set of parameters may be received by the receiving module 212.
[0058] At block 504, a second set of parameters may be received from the user using a
second onscreen keyboard. The second set of parameters is indicative of a usability of the second onscreen keyboard. The user uses the second onscreen keyboard via a wireless remote. In one example, the second set of parameters may be received by the receiving module 212.
[0059] At block 506, a first cognitive score for the first onscreen keyboard may be
determined using the first set of parameters. In one example, the first cognitive score may be determined by the calculation module 214.
[0060] At block 508, a second cognitive score for the second onscreen keyboard may be
determined using the second set of parameters. In one example, the second cognitive score may be determined by the calculation module 214.
[0061] At block 510, the first cognitive score and the second cognitive score are
validated using an Electroencephalography (EEG) signal of the user. The EEG signal of the user is captured while the user is using the first onscreen keyboard and the second onscreen keyboard. In one example, the first cognitive score and the second cognitive score may be validated by the validation module 216.
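The flow of blocks 502 through 510 can be sketched end to end. All function bodies and numeric values below are hypothetical stand-ins: receive_parameters fakes the receiving module 212, cognitive_score stands in for the KLM-GOMS-based calculation module 214, and validate mimics the validation module 216 with an assumed relative tolerance:

```python
def receive_parameters(keyboard_id):
    # Blocks 502/504: stand-in for the receiving module 212 (hypothetical values)
    params = {
        "first": {"P": 1.2, "keystrokes": 30},
        "second": {"P": 1.5, "keystrokes": 30},
    }
    return params[keyboard_id]

def cognitive_score(params):
    # Blocks 506/508: toy stand-in for the KLM-GOMS-based calculation module 214
    return params["P"] * params["keystrokes"]

def validate(score, eeg_score, rel_tol=0.1):
    # Block 510: compare a model score against an EEG-derived validation score
    return abs(score - eeg_score) <= rel_tol * score

s1 = cognitive_score(receive_parameters("first"))
s2 = cognitive_score(receive_parameters("second"))
# Hypothetical EEG-derived validation scores
ok = validate(s1, 36.5) and validate(s2, 44.0)
```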
[0062] Although implementations for methods and systems for evaluating onscreen
keyboards have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for a system and a method for evaluating onscreen keyboards.
WE CLAIM:
1. A method for evaluating two onscreen keyboards by determining cognitive scores associated with each of the two onscreen keyboards, the method comprising:
receiving a first set of parameters from a user using a first onscreen keyboard, wherein the first set of parameters is indicative of a usability of the first onscreen keyboard;
receiving a second set of parameters from the user using a second onscreen keyboard, wherein the second set of parameters is indicative of a usability of the second onscreen keyboard;
determining a first cognitive score for the first onscreen keyboard using the first set of parameters;
determining a second cognitive score for the second onscreen keyboard using the second set of parameters;
validating the first cognitive score and the second cognitive score using an Electroencephalography (EEG) signal of the user, wherein the EEG signal of the user is captured while the user is using the first onscreen keyboard and the second onscreen keyboard,
wherein the receiving the first set of parameters, the receiving the second set of parameters, the determining the first cognitive score, the determining the second cognitive score, and the validating are performed by a processor using programmed instructions stored in a memory.
2. The method of claim 1, wherein each of the first set of parameters and the second set of parameters comprises a pointing parameter P indicative of a sum of a time required to locate a key on either the first onscreen keyboard or the second onscreen keyboard and a time required to move a focus onto the key using a wireless remote.
3. The method of claim 2, wherein the first cognitive score and the second cognitive score are determined using one or more equations associated with a Keystroke Level Model - Goals, Operators, Methods, and Selection (KLM-GOMS) model, wherein the one or more
equations associated with the KLM-GOMS model are modified using the pointing parameter P.
4. The method of claim 1, wherein the user is unassisted with predictive text entry while using the first onscreen keyboard and while using the second onscreen keyboard.
5. The method of claim 1, further comprising determining a first cognitive load and a second cognitive load based upon the first cognitive score and the second cognitive score, respectively.
6. The method of claim 4, wherein the validating comprises:
determining a first validation score associated with the first onscreen keyboard by capturing the EEG signal while the user is using the first onscreen keyboard without predictive text entry;
determining a second validation score associated with the second onscreen keyboard by capturing the EEG signal while the user is using the second onscreen keyboard without predictive text entry;
comparing the first validation score with the first cognitive score to determine whether the first cognitive score is the same as the first validation score, thereby validating the first cognitive score; and
comparing the second validation score with the second cognitive score to determine whether the second cognitive score is the same as the second validation score, thereby validating the second cognitive score.
7. The method of claim 1, wherein the user is assisted with predictive text entry while using the first onscreen keyboard and while using the second onscreen keyboard.
8. The method of claim 7, further comprising determining a dynamic mental operator when the user is assisted with predictive text entry.
9. The method of claim 8, wherein the dynamic mental operator is determined using one or more equations associated with a Keystroke Level Model - Goals, Operators, Methods, and Selection (KLM-GOMS) model, wherein the one or more equations associated with the KLM-GOMS model are modified using a pointing parameter P.
10. The method of claim 7, wherein the validating comprises:
determining a first validation score associated with the first onscreen keyboard by capturing the EEG signal while the user is using the first onscreen keyboard with predictive text entry;
determining a second validation score associated with the second onscreen keyboard by capturing the EEG signal while the user is using the second onscreen keyboard with predictive text entry;
comparing the first validation score with the first cognitive score to determine whether the first cognitive score is the same as the first validation score; and
comparing the second validation score with the second cognitive score to determine whether the second cognitive score is the same as the second validation score, thereby validating the first cognitive score and the second cognitive score.
11. The method of claim 1, further comprising:
determining a dynamic mental operator when the user is assisted with predictive text entry while using the first onscreen keyboard and while using the second onscreen keyboard; and
validating the dynamic mental operator.
12. The method of claim 11, wherein the validating the dynamic mental operator comprises:
determining a first validation score associated with the first onscreen keyboard by capturing the EEG signal while the user is using the first onscreen keyboard with predictive text entry;
determining a second validation score associated with the second onscreen keyboard by capturing the EEG signal while the user is using the second onscreen keyboard with predictive text entry;
determining a difference between the first validation score and the second validation score to generate a validation operator; and
comparing the validation operator with the dynamic mental operator to determine whether the validation operator is the same as the dynamic mental operator, thereby validating the dynamic mental operator.
13. The method of claim 12, wherein the determining the first validation score comprises:
receiving the EEG signal while the user is using the first onscreen keyboard;
applying a Common Spatial Pattern (CSP) filter to the EEG signal to obtain a filtered EEG signal;
extracting EEG features from the filtered EEG signal using a sliding window approach, wherein the EEG features comprise at least one of a log variance, HJORTH parameters, frequency band powers, and spectral distributions;
classifying the EEG features using a linear Support Vector Machine (SVM); and
determining the first validation score based upon the classification.
14. The method of claim 12, wherein the determining the second validation score comprises:
receiving the EEG signal while the user is using the second onscreen keyboard;
applying a Common Spatial Pattern (CSP) filter to the EEG signal to obtain a filtered EEG signal;
extracting EEG features from the filtered EEG signal using a sliding window approach, wherein the EEG features comprise at least one of a log variance, HJORTH parameters, frequency band powers, and spectral distributions;
classifying the EEG features using a linear Support Vector Machine (SVM); and
determining the second validation score based upon the classification.
15. A system for evaluating two onscreen keyboards by determining cognitive scores associated
with each of the two onscreen keyboards, the system comprising:
a processor; and
a memory coupled to the processor, wherein the processor is configured to execute a plurality of modules stored in the memory, and wherein the plurality of modules comprise:
a receiving module configured to
receive a first set of parameters from a user using a first onscreen keyboard, wherein the first set of parameters is indicative of a usability of the first onscreen keyboard;
receive a second set of parameters from the user using a second onscreen keyboard, wherein the second set of parameters is indicative of a usability of the second onscreen keyboard;
a calculation module configured to
determine a first cognitive score for the first onscreen keyboard using the first set of parameters;
determine a second cognitive score for the second onscreen keyboard using the second set of parameters;
a validation module configured to validate the first cognitive score and the second
cognitive score using an Electroencephalography (EEG) signal of the user, wherein the EEG
signal of the user is captured while the user is using the first onscreen keyboard and the
second onscreen keyboard.
16. The system of claim 15, wherein the validation module validates the first cognitive score and the second cognitive score by:
determining a first validation score associated with the first onscreen keyboard by capturing the EEG signal while the user is using the first onscreen keyboard without predictive text entry;
determining a second validation score associated with the second onscreen keyboard by capturing the EEG signal while the user is using the second onscreen keyboard without predictive text entry;
comparing the first validation score with the first cognitive score to determine whether the first cognitive score is the same as the first validation score, thereby validating the first cognitive score; and
comparing the second validation score with the second cognitive score to determine whether the second cognitive score is the same as the second validation score, thereby validating the second cognitive score.
17. The system of claim 15, wherein
the calculation module is further configured to determine a dynamic mental operator when the user is assisted with predictive text entry while using the first onscreen keyboard and while using the second onscreen keyboard; and
the validation module is further configured to validate the dynamic mental operator.
18. The system of claim 17, wherein the validation module validates the dynamic mental
operator by:
determining a first validation score associated with the first onscreen keyboard by capturing the EEG signal while the user is using the first onscreen keyboard with predictive text entry;
determining a second validation score associated with the second onscreen keyboard by capturing the EEG signal while the user is using the second onscreen keyboard with predictive text entry;
determining a difference between the first validation score and the second validation score to generate a validation operator; and
comparing the validation operator with the dynamic mental operator to determine whether the validation operator is the same as the dynamic mental operator, thereby validating the dynamic mental operator.
19. The system of claim 17, wherein the validation module determines the first validation score
by:
receiving the EEG signal while the user is using the first onscreen keyboard;
applying a Common Spatial Pattern (CSP) filter to the EEG signal to obtain a filtered EEG signal;
extracting EEG features from the filtered EEG signal using a sliding window
approach, wherein the EEG features comprise at least one of a log variance, HJORTH
parameters, frequency band powers, and spectral distributions;
classifying the EEG features using a linear Support Vector Machine (SVM); and
determining the first validation score based upon the classification.
20. A non-transitory computer readable medium containing a computer program product for evaluating two onscreen keyboards by determining cognitive scores associated with each of the two onscreen keyboards, the non-transitory computer readable medium comprising:
a program code for receiving a first set of parameters from a user using a first onscreen keyboard, wherein the first set of parameters is indicative of a usability of the first onscreen keyboard;
a program code for receiving a second set of parameters from the user using a second onscreen keyboard, wherein the second set of parameters is indicative of a usability of the second onscreen keyboard;
a program code for determining a first cognitive score for the first onscreen keyboard using the first set of parameters;
a program code for determining a second cognitive score for the second onscreen keyboard using the second set of parameters;
a program code for validating the first cognitive score and the second cognitive score using an Electroencephalography (EEG) signal of the user, wherein the EEG signal of the user is captured while the user is using the first onscreen keyboard and the second onscreen keyboard.