
Providing Adaptive Content

Abstract: The present invention discloses systems and methods for providing content adapted for a learner. A learner profile database (212) stores information about the learner’s cognitive ability and/or previous knowledge, and a content collection (210) stores educational/training content from which at least one content module is derived for the learner. An application server (116) includes a user interaction unit (202) enabling a teacher (108) and the learner to interact with the system, a content collection manager (204) enabling the teacher (108) to manage the content collection (210), a learner profile manager (206) for creating and updating the learner profile database (212), and an adaptive content creator (208) which, in response to a request from user interaction unit (202) for creating a content module for the learner, derives a content module from content collection (210) based on the learner’s profile stored in learner profile database (212). [FIG. 2 to accompany the Abstract]


Patent Information

Application #:
Filing Date: 25 November 2020
Publication Number: 21/2022
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Email: patent@ipnext.in
Parent Application:

Applicants

DRONSTUDY PVT. LTD.
A, FLOOR-10, 1004, NEST HOUSE APARTMENT, OPP. D.R.B. COLLEGE, BHATAR ROAD, SURAT, GUJARAT-395007, INDIA

Inventors

1. NEETIN AGRAWAL
A, FLOOR-10, 1004, NEST HOUSE APARTMENT, OPP. D.R.B. COLLEGE, BHATAR ROAD, SURAT, GUJARAT-395007, INDIA
2. K PRABHU PRAKASH
A, FLOOR-10, 1004, NEST HOUSE APARTMENT, OPP. D.R.B. COLLEGE, BHATAR ROAD, SURAT, GUJARAT-395007, INDIA
3. NIKHIL SHARMA
A, FLOOR-10, 1004, NEST HOUSE APARTMENT, OPP. D.R.B. COLLEGE, BHATAR ROAD, SURAT, GUJARAT-395007, INDIA

Specification

DESC:PROVIDING ADAPTIVE CONTENT
FIELD OF INVENTION
The present invention generally relates to providing adaptive content to one or more learners. In particular, the present invention relates to providing educational and/or training related content that has been specially adapted for a learner based on factors such as the cognitive ability of the learner and previous knowledge of the learner.
BACKGROUND OF THE INVENTION
Online learning has seen a dramatic increase over the past decade. A wide variety of educational content targeting various learner segments ranging from pre-nursery to K-12, to professional courses and continuing education, is now available online.
Many leading players in various education segments now use online content to augment or even replace classroom teaching. For example, in higher education, leading universities such as Indian Institutes of Technology (IIT), Massachusetts Institute of Technology (MIT), Harvard University, Indian Institutes of Management (IIM) and the Indian School of Business (ISB), to name a few, offer online content in various forms such as Massive Open Online Courses (MOOC), distance learning, executive education, and so on. Similarly, in the secondary education segment, players such as Byju's (M/s Think and Learn Pvt. Ltd.), Toppr (M/s Toppr Technologies Pvt. Ltd.), Vedantu (M/s Vedantu Innovations Pvt. Ltd.), Unacademy (Sorting Hat Technologies Pvt Ltd), and Doubtnut (M/s Class 21A Technologies Pvt. Ltd.), to name a few, offer online content that can supplement or replace classroom format teaching.
The growing popularity of online learning is evidenced by the rise of many online learning solution players in various countries. For example, in the United States of America, solutions from companies such as Coursera, Inc., Udemy, Inc., Course Hero, Inc., Quizlet, Inc., Guild Education, Inc., Udacity, Inc., Age of Learning, Inc., and so on have gained tremendous popularity.
Conventional online learning solutions offer many compelling benefits that cannot be matched in the traditional classroom format. For example, a learner can study an online course at a time and place of her choosing. Further, a learner can revisit the content as many times as she likes, repeat certain segments that she found difficult, skip over other segments that she already knows, and so on.
On the other hand, conventional online learning solutions also have some drawbacks. For example, conventional online learning systems are unable to adapt their content for each learner. In a traditional classroom, a teacher often adapts her content delivery based on factors such as the cognitive ability and previous knowledge of a learner. The teacher may adapt various aspects such as pace of delivery, depth of content, number and type of illustrative examples, and style of delivery, to better suit the learner(s). In contrast, conventional online learning solutions usually serve up the same educational content to thousands or even millions of users. Each learner has her own cognitive ability and knowledge level, and the ‘one size fits all’ approach of online learning solutions reduces learner engagement. Further, manual efforts by the learner to repeat, skip, and pause content to better follow the teaching distract the learner and negatively impact the learning outcome.
Some conventional online learning solutions attempt to provide a few different versions of content adapted to a few different categories of learners, e.g. different content for beginner, advanced, and expert learners. However, this approach is not only cumbersome, but provides limited adaptability, and often fails to adequately tailor the content in a manner that leads to a better learner experience and better learning outcomes. As the number of categories increases, the time and money investment for creating different versions rises. With this approach, it is often impractical and sometimes impossible to cater to a wide variety of learners, each having different abilities and knowledge.
Thus, there is a need for adaptive online learning solutions that provide educational content adapted to the abilities and knowledge of the learner. Further, such solutions must be practical and effective. They must be implementable with reasonable investment of money and time and must provide flexibility to cater to a wide variety of learners.
OBJECT OF THE INVENTION
It is an object of the present invention to provide adaptive content.
It is another object of the present invention to provide adaptive content for educational and training purposes.
It is another object of the present invention to provide adaptive content that is tailored to the cognitive ability and/or the prior knowledge of a learner.
It is another object of the present invention to increase learner interest and improve learning outcomes.
It is another object of the present invention to replicate the adaptive style of classroom or face-to-face teaching in online learning solutions.
STATEMENT OF THE INVENTION
Brief Description of Drawings
FIG. 1 shows an example environment of an adaptive content system according to an embodiment of the present invention.
FIG. 2 shows an adaptive content system according to an embodiment of the present invention.
FIG. 3 shows an example hierarchy of a content collection according to an embodiment of the present invention.
FIG. 4 shows an example partial hierarchy according to an embodiment of the present invention.
FIG. 5 shows a method of managing a content collection according to an embodiment of the present invention.
FIG. 6 shows a method of managing a learner profile according to an embodiment of the present invention.
FIG. 7 shows a method of creating a content module according to an embodiment of the present invention.
Detailed Description
The following is a detailed description of example embodiments to illustrate the principles of the invention. The embodiments are provided to illustrate aspects of the invention, but the invention is not limited to any embodiment. The scope of the invention encompasses numerous alternatives, modifications and equivalents; it is limited only by the claims.
Further, throughout this disclosure, the singular terms “a,” “an,” and “the” include plural referents unless the context clearly indicates otherwise. Similarly, the word “or” is intended to include “and” unless the context clearly indicates otherwise.
Numerous specific details are set forth in the following description to provide a thorough understanding of the invention. However, the invention may be practiced according to the claims without some or all of these specific details. For clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
The present invention discloses systems and methods for providing adaptive content. In various embodiments, a content module for presentation to a learner is derived from a content collection in a manner that takes into consideration the learner’s cognitive ability and/or prior knowledge and tailors the content module accordingly.
Content Collection
The content collection is the gamut of educational/training content from which content modules are derived for different learners. In various embodiments, the content collection can include, for example, the content included in a particular educational course, the content prepared by a particular educational content provider, the content related to a particular subject, or various combinations thereof. The content can include, without limitation, video content, audio content, textual content, graphic content, or a combination of the foregoing.
Content Module
The content module is a unit of educational content tailored for a particular learner. In various embodiments, the content module is the educational content for, for example, a class or a lecture, or content covering a particular topic or sub-topic.
The content collection is a universal set. The content module is its subset, culled in view of a learner’s cognitive ability and/or prior knowledge. It will be apparent to a person skilled in the art that in different embodiments, the content collection and the content module can be designed in a wide variety of different ways without deviating from the spirit and scope of the present invention.
OVERVIEW
FIG. 1 shows an example environment of an adaptive content system according to an embodiment of the present invention.
The figure shows an adaptive content system (ACS) 102 for providing adaptive content to a first learner 104 and a second learner 106 among a plurality of learners (not shown). The content, as well as information useful for adapting the content for various learners, is provided to ACS 102 by a teacher 108. First learner 104, second learner 106 and teacher 108 interact with ACS 102 via user devices 110, 112, and 114 respectively.
ACS 102 includes an application server (AS) 116 for implementing various functions as described with reference to the systems and methods disclosed herein. Further, ACS 102 includes a database (DB) 118 for storing at least one content collection, and a plurality of learner profiles.
AS 116 allows teacher 108 to create and maintain a content collection in DB 118.
Further, AS 116 allows a learner to register with ACS 102, and creates and/or maintains a learner profile for each registered learner. In an embodiment, a learner profile stores information about the cognitive ability and/or prior knowledge of the learner.
Further still, AS 116 handles requests for providing adaptive content. Specifically, AS 116 handles a request for a content module pertaining to a concept, wherein the request is associated with a particular learner. In response to the request, AS 116 accesses a content collection in DB 118 to create a content module that is specifically matched to the learner’s cognitive ability and/or prior knowledge.
Thus, in the illustrated example, consider that first learner 104 and second learner 106 have different cognitive ability and/or prior knowledge. When ACS 102 handles a request for a content module for a topic in connection with first learner 104, it creates a first content module and sends it to user device 110 for presenting to first learner 104. On the other hand, when ACS 102 handles a request for a content module for the same topic but in connection with second learner 106, it creates a second content module that is different from the first content module, and sends it to user device 112 for presenting to second learner 106.
Thus, when accessing content related to the same topic/concept from ACS 102, first learner 104 and second learner 106 receive different content modules specially adapted to their own cognitive ability and/or prior knowledge.
ADAPTIVE CONTENT SYSTEM
FIG. 2 shows an adaptive content system according to an embodiment of the present invention.
The figure shows ACS 102, including AS 116 and DB 118. AS 116 further includes a user interaction unit (UIU) 202, a content collection manager (CCM) 204, a learner profile manager (LPM) 206, and an adaptive content creator (ACC) 208. Similarly, DB 118 further includes a content collection (CC) 210 and a learner profile database (LPD) 212.
UIU 202 allows teachers and learners to interact with ACS 102 via their respective user devices, such as user devices 110, 112, and 114 (not shown). In various embodiments, user interaction unit 202 includes, without limitation, a website, a web service, an application programming interface (API), and/or combinations of the foregoing.
CCM 204 allows teacher 108 access to content collection (CC) 210. In various embodiments, CCM 204 allows teacher 108 to create, manage, update, modify, and/or delete CC 210.
LPM 206 manages LPD 212. In an embodiment, when a new learner registers with ACS 102, LPM 206 creates the new learner’s profile in LPD 212. Over time, as the learner uses ACS 102, LPM 206 updates the learner’s profile as described later in this description.
ACC 208 receives a request from UIU 202 for creating a content module pertaining to a topic/concept in connection with a particular learner, and uses CC 210 and LPD 212 to generate a content module adapted to the learner’s cognitive ability and/or previous knowledge as described later in this description. ACC 208 provides the content module to UIU 202, which in turn further presents it to the learner’s user device.
In an embodiment, the content module includes references to one or more content segments, such as addresses of or hyperlinks to one or more video files. UIU 202 uses the references to present the content segments on the learner’s user device.
In another embodiment, the content module created by ACC 208 includes content, such as one or more video files. UIU 202 sends the content to the learner’s user device for presenting to the learner.
HIERARCHY
In an embodiment, content in CC 210 is organized in a hierarchy. It will be apparent to a person skilled in the art that in various embodiments, the content collection can be organized in other structures besides a hierarchy, and other aspects of the present invention can be suitably adapted without deviating from the spirit and scope of the present invention. For example, the content collection can be organized as a graph, a list, a partially ordered set, and so on.
FIG. 3 shows an example hierarchy in CC 210 according to an embodiment of the present invention.
The figure shows a hierarchy 300 comprising nodes 302 to 324 arranged in levels 1 to 5. Node 302, the base node of hierarchy 300, denotes that CC 210 pertains to the subject ‘PHYSICS’. Nodes 304 and 306, at level 2 under node 302, denote the class for which the ‘PHYSICS’ content is intended, i.e. ‘CLASS 6’ and ‘CLASS 10’ respectively. Further, nodes 308 and 310, at level 3 under node 306, show topics under ‘CLASS 10’, namely ‘ELECTRICITY’ and ‘NEWTON’S LAWS’ respectively. Nodes 312 and 314, at level 4 under node 310, show sub-topics under ‘NEWTON’S LAWS’, namely ‘1ST LAW’ and ‘2ND LAW’ respectively.
Finally, leaf nodes 316 to 324, at level 5 under node 314, pertain to various concepts under the ‘2ND LAW’.
Table 1 below illustrates an example dot (.) notation scheme to address various nodes in hierarchy 300.
Node	Address
302	Physics
304	Physics.Class6
306	Physics.Class10
308	Physics.Class10.Electricity
310	Physics.Class10.NewtonsLaws
312	Physics.Class10.NewtonsLaws.1stLaw
314	Physics.Class10.NewtonsLaws.2ndLaw
316	Physics.Class10.NewtonsLaws.2ndLaw.Definition1
318	Physics.Class10.NewtonsLaws.2ndLaw.Definition2
320	Physics.Class10.NewtonsLaws.2ndLaw.Application
322	Physics.Class10.NewtonsLaws.2ndLaw.Example1
324	Physics.Class10.NewtonsLaws.2ndLaw.Example2
Table 1 – Nodes and Addresses
It will be apparent to a person skilled in the art that various other schemes for addressing nodes in a hierarchy, such as Uniform Resource Identifier (URI), XPath, etc. are known in the art, and that any such scheme can be suitably used in conjunction with the present invention.
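The dot-notation addressing of Table 1 can be sketched as a simple recursive traversal. The following is an illustrative sketch, not part of the specification; the tuple representation of the hierarchy and the function name are assumptions.

```python
# Illustrative sketch: deriving dot-notation addresses for nodes in a
# content hierarchy such as hierarchy 300. A node is (name, [children]).
def build_addresses(node, prefix=""):
    """Return a dict mapping each node name to its dot-notation address."""
    name, children = node
    address = f"{prefix}.{name}" if prefix else name
    addresses = {name: address}
    for child in children:
        addresses.update(build_addresses(child, address))
    return addresses

# A fragment of hierarchy 300 (concept leaf nodes omitted for brevity).
hierarchy = ("Physics", [
    ("Class6", []),
    ("Class10", [
        ("Electricity", []),
        ("NewtonsLaws", [
            ("1stLaw", []),
            ("2ndLaw", []),
        ]),
    ]),
])

addrs = build_addresses(hierarchy)
print(addrs["2ndLaw"])  # Physics.Class10.NewtonsLaws.2ndLaw
```

A URI- or XPath-based scheme could be substituted by changing only the address-composition line.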
CONCEPTS
A content collection includes a plurality of concepts. A concept is a node in the hierarchy of the content collection.
Pre-requisites
A first concept is a pre-requisite for a second concept when in order to understand the second concept, the learner needs to first understand the first concept.
SEGMENTS
In an embodiment, each concept is associated with one or more segments. A segment is a unit of educational content. In various embodiments, a segment includes a video clip, an audio clip, a textual snippet, a graphic image, and/or a combination of the foregoing. Each segment has associated metadata for facilitating adaptation of the educational content for a learner.
Segments associated with a concept are different ways of presenting the same concept. Thus, while one segment is optimal for presenting the concept to one learner, another segment may be better suited for another learner. For example, different segments associated with a concept may differ in the pace of teaching, in including or excluding calculation steps while discussing a numerical problem, in explaining the concept at different levels of detail, in whether or not they use animation to illustrate an application of the concept, and so on.
For example, particularly sharp learners who have high cognitive abilities tend to prefer a faster pace of teaching, and may lose interest, grow impatient, or get distracted if they find the pace too slow. Similarly, learners who have at least some prior knowledge of the concept could revisit the concept to refresh their memory or clear some doubt. Such learners tend to benefit most from a fast-paced high-level teaching of the concept, without spending much time on examples. On the other hand, learners with relatively lower cognitive abilities and/or prior knowledge tend to prefer a slower pace of teaching.
Table 2 below illustrates a scheme for addressing a first segment, a second segment, and a third segment associated with the concept Physics.Class10.NewtonsLaws.2ndLaw.Definition1.
Segment Address
First Physics.Class10.NewtonsLaws.2ndLaw.Definition1(1)
Second Physics.Class10.NewtonsLaws.2ndLaw.Definition1(2)
Third Physics.Class10.NewtonsLaws.2ndLaw.Definition1(3)
Table 2 – Example Segments and Addresses
COGNITIVE PROFILES
A cognitive profile is a measure of cognitive ability, which is the mental capability involving the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.
An example cognitive profile scheme for use in conjunction with Science, Technology, Engineering and Mathematics (STEM) educational content collections is disclosed herein. It measures eight (8) parameters of cognitive ability on a scale of 1 to 10 (10 being the highest) in an 8x1 array C, having elements C(1) to C(8) as presented below in Table 3.
Element Parameter
C(1) Ability to apply theory to applications or problem solving
C(2) Ability to identify and recall patterns
C(3) Ability to think creatively or out of the box
C(4) Ability to visualize 3D structure
C(5) Ability to understand abstract concepts
C(6) Ability to do fast and accurate calculations
C(7) Ability to understand complex diagrams
C(8) Ability to find errors
Table 3 – Example Parameters for Measuring Cognitive Ability
Cognitive Requirement of a Segment (CS)
In various embodiments, each segment S in CC 210 has a cognitive requirement profile CS associated with it. CS represents the cognitive ability required to understand the content in segment S. For example, a CS = [1, 0, 2, 9, 3, 0, 8, 1] indicates a visually inclined segment that requires the learner to visualize 3D structures (CS(4) = 9) and understand complex diagrams (CS(7) = 8), but does not need the ability to identify and recall patterns (CS(2) = 0) or do fast and accurate calculations (CS(6) = 0).
Cognitive Ability of a Learner (CL)
Similarly, in various embodiments, a cognitive ability profile CL of a learner L is maintained. CL represents learner L’s cognitive ability.
Cognitive Distance
In various embodiments, ACS 102 calculates a distance between the cognitive ability CL of a learner L with the cognitive requirement CS of a segment S.
The cognitive distance between any two cognitive profiles CS and CL, denoted by dist(CS, CL), is a measure of how similar those profiles are to each other. A lower distance indicates a greater similarity.
A variety of mathematical techniques to suitably calculate dist(CS, CL) are well known, such as, but not limited to, a root mean square distance given by the formula:
dist(CS, CL) = sqrt( ( Σ_{i=1}^{N} (CS(i) − CL(i))² ) / N ) … Eq. (1)
where N is the number of parameters in the cognitive profiles.
Similarly, another analogous technique is to calculate the sum of absolute differences, as per the following formula:
dist(CS, CL) = Σ_{i=1}^{N} |CS(i) − CL(i)| … Eq. (2)
where N is the number of parameters in the cognitive profiles.
It will be apparent to a person skilled in the art that any such technique may be employed, without limitation, to calculate the cognitive distance between two cognitive profiles.
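The two distance measures of Eq. (1) and Eq. (2) can be sketched directly. This is an illustrative sketch only; the function names and the second learner profile are assumptions, while the example CS array is taken from the description above.

```python
import math

# Sketch of the cognitive-distance measures of Eq. (1) and Eq. (2).
def rms_distance(cs, cl):
    """Root mean square distance between two cognitive profiles (Eq. 1)."""
    n = len(cs)
    return math.sqrt(sum((s - l) ** 2 for s, l in zip(cs, cl)) / n)

def abs_distance(cs, cl):
    """Sum of absolute differences between two cognitive profiles (Eq. 2)."""
    return sum(abs(s - l) for s, l in zip(cs, cl))

# The visually inclined segment from the description, vs. a learner whose
# profile happens to match it exactly.
CS = [1, 0, 2, 9, 3, 0, 8, 1]
CL = [1, 0, 2, 9, 3, 0, 8, 1]
print(rms_distance(CS, CL))  # 0.0 — identical profiles
print(abs_distance(CS, [2, 1, 3, 8, 4, 1, 7, 2]))  # 8 — each parameter differs by 1
```

Either measure (or any other standard distance) can be plugged into the segment-selection logic unchanged, since only the relative ordering of distances matters.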
Cognitive Comparison
In various embodiments, ACS 102 compares the cognitive ability CL of a learner L with the cognitive requirement CS of a segment S.
The cognitive comparison between any two cognitive profiles CS and CL, denoted by CS > CL, is a measure of how one cognitive profile compares with the other, and specifically whether the first cognitive profile (i.e. CS) represents a greater cognitive ability than the second (i.e. CL).
A variety of mathematical techniques to suitably evaluate CS > CL are well known. For example, CS > CL is true if:
Σ_{i=1}^{N} CS(i) > Σ_{i=1}^{N} CL(i) … Eq. (3)
where N is the number of parameters in the cognitive profiles.
Similarly, other such analogous techniques are well known. It will be apparent to a person skilled in the art that any such technique may be employed, without limitation, to evaluate the truth value of the comparison CS > CL for any two cognitive profiles CS and CL.
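The sum-based comparison of Eq. (3) reduces to a one-line check. The following sketch is illustrative only; the example CL profile is an assumption.

```python
# Sketch of the cognitive comparison of Eq. (3): CS > CL is true when the
# sum of CS's parameters exceeds the sum of CL's parameters.
def cognitive_greater(cs, cl):
    return sum(cs) > sum(cl)

CS = [1, 0, 2, 9, 3, 0, 8, 1]   # parameters sum to 24
CL = [3, 3, 3, 3, 3, 3, 3, 2]   # parameters sum to 23
print(cognitive_greater(CS, CL))  # True
```

Note that a sum-based comparison deliberately ignores which parameters differ; a per-parameter or weighted comparison is an equally valid alternative technique.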
While the foregoing example cognitive profile scheme has been used to illustrate the principles of the present invention, it is pertinent to note that measurement of cognitive ability is a subject of extensive and ongoing research. Many schemes for such testing and measurement of cognitive ability are well known, and many more are likely to be developed in the future. Some schemes are better suited for certain types of content than others. A person skilled in the art would appreciate that any known or future cognitive profile scheme may be judiciously selected for use in conjunction with the present invention without deviating from its spirit and scope.
METHODS FOR PROVIDING ADAPTIVE CONTENT
FIG. 4 shows an example partial hierarchy according to an embodiment of the present invention.
The figure shows hierarchy 400, with nodes as listed in the table below.
Reference Numeral Concept Description Address
402 The topic of Newton’s 2nd Law of motion 2ndLaw
404 A first definition of Newton’s 2nd law: “The force applied on a body is equal to the rate of change of momentum of the body.” 2ndLaw.Definition1
406 A second definition of Newton’s 2nd law: “The force applied on a body is equal to the mass of the body multiplied by the acceleration of the body.” 2ndLaw.Definition2
408 Proof that both definitions can be derived from each other 2ndLaw.Proof
410 Numerical based on simple formulae of Newton’s 2nd law 2ndLaw.SimpleNum1
412 Numerical based on simple formulae of Newton’s 2nd law 2ndLaw.SimpleNum2
414 Numerical based on simple formulae of Newton’s 2nd law 2ndLaw.SimpleNum3
416 Tricky numerical 2ndLaw.TrickyNum1
418 Tricky numerical in which a practical real-life situation is involved 2ndLaw.TrickyNum2
420 Application of Newton’s law when a fielder in cricket stops the ball 2ndLaw.Application1
422 Application of Newton’s law when a high jumper lands on a soft bed 2ndLaw.Application2
Table 4 – Example hierarchy 400
Example hierarchy 400 is now used to illustrate various principles of the present invention.
FIG. 5 shows a method of managing a content collection (CC 210) according to an embodiment of the present invention.
In various embodiments, the method described with reference to FIG. 5 is performed by CCM 204 while interacting with teacher 108 via UIU 202.
First, at step 502, various concepts falling under the specified topic are organized in a hierarchy. For example, hierarchy 400 shows concepts falling under the topic 2ndLaw. In an embodiment, said organization is done by teacher 108 via a suitable user interface provided via UIU 202 allowing her to directly manipulate the hierarchy. In another embodiment, UIU 202 provides an interface allowing teacher 108 to import hierarchy 400 from an existing source, such as another content collection or another hierarchy in an appropriate format, such as eXtensible Markup Language (XML), JavaScript Object Notation (JSON), and the like.
In a hierarchy, some concepts are pre-requisites of others and/or occur in a certain order as per the logical flow of content. In other words, some concepts must be presented to the learner before other concepts. At step 504, a partially ordered set (or poset) for the topic is created based on the hierarchy. The poset records ordinal dependencies among the concepts in the hierarchy.
For example, a poset 400P for hierarchy 400 is shown in the table below.
Row Concept(s)
R(1) 2ndLaw.Definition1, 2ndLaw.Definition2
R(2) 2ndLaw.Proof
R(3) 2ndLaw.SimpleNum1, 2ndLaw.SimpleNum2, 2ndLaw.SimpleNum3, 2ndLaw.TrickyNum1, 2ndLaw.TrickyNum2
R(4) 2ndLaw.Application1, 2ndLaw.Application2
Table 5 – Example poset 400P for hierarchy 400
In the notation used in Table 5, concepts listed in a row R(n) must be presented to the learner before those listed in a row R(m), for all m greater than n. Further, all concepts in the same row R(n) can be presented to a learner in any order. Poset 400P records ordinal dependencies of hierarchy 400. The concepts listed in R(1) must be presented before those in R(2). Similarly, R(2) concepts must precede R(3) concepts, and R(3) concepts must precede R(4) concepts.
In an embodiment, said ordinal dependencies are inferred from the hierarchy itself. For example, the order for presenting concepts in the hierarchy is obtained from a depth-first traversal (DFT) of the hierarchy, or a breadth-first traversal (BFT) of the hierarchy, or a judicious combination of the foregoing. In another embodiment, said ordinal dependencies are specified by teacher 108 through a suitable interface provided by UIU 202. In yet another embodiment, CCM 204 auto-generates the poset based on a suitable combination of BFT and DFT, and thereafter allows teacher 108 to update the auto-generated poset via a suitable interface.
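The auto-generation of a poset from the hierarchy (step 504) can be sketched as a breadth-first traversal in which all concepts at the same depth land in the same row, so that parent concepts always precede their children. This is an illustrative sketch under assumed data structures, not the specification's method; a DFT-based or combined ordering would be implemented analogously.

```python
# Illustrative sketch: auto-generating a poset (list of rows) from a
# hierarchy by breadth-first traversal. A node is (name, [children]).
def poset_from_hierarchy(root):
    """Return rows of concept names; row n must precede row n+1."""
    rows, level = [], [root]
    while level:
        rows.append([name for name, _ in level])
        # Collect all children of the current level for the next row.
        level = [child for _, children in level for child in children]
    return rows

# A tiny assumed fragment in which Proof depends on Definition1.
hierarchy = ("2ndLaw", [
    ("Definition1", [("Proof", [])]),
    ("Definition2", []),
])
print(poset_from_hierarchy(hierarchy))
# [['2ndLaw'], ['Definition1', 'Definition2'], ['Proof']]
```

As described above, an auto-generated poset of this kind would then be presented to teacher 108 for manual adjustment via UIU 202.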
Then at optional step 506, one or more external pre-requisite concepts are added to the poset. External pre-requisite concepts are pre-requisites for one or more concepts within the topic / hierarchy under consideration, but do not themselves fall under the topic / hierarchy. For example, an understanding of the concepts of momentum and acceleration is necessary for understanding Newton’s 2nd Law, thus making them pre-requisites for concepts 2ndLaw.Definition1 and 2ndLaw.Definition2, respectively. However, these concepts do not fall under hierarchy 400, thus they are external pre-requisites. Consider, for example, that they are covered in another hierarchy with root node Mechanics at addresses Mechanics.Momentum and Mechanics.Acceleration, respectively. These are added to poset 400P to obtain a topic poset 400Q as shown in the table below.
Row Concept(s)
R(0) Mechanics.Momentum, Mechanics.Acceleration
R(1) 2ndLaw.Definition1, 2ndLaw.Definition2
R(2) 2ndLaw.Proof
R(3) 2ndLaw.SimpleNum1, 2ndLaw.SimpleNum2, 2ndLaw.SimpleNum3, 2ndLaw.TrickyNum1, 2ndLaw.TrickyNum2
R(4) 2ndLaw.Application1, 2ndLaw.Application2
Table 6 – Example topic poset 400Q including external pre-requisites of 400P
In an embodiment, external pre-requisites are added in a new row at the beginning of the poset (as shown in Table 6). In another embodiment, they are added in a new row inserted right before the row of the concept for which they are a pre-requisite. E.g. an external pre-requisite of R(2) concept 2ndLaw.Proof is added in a new row inserted between R(1) and R(2). In various embodiments, external pre-requisites can be added in any row preceding the row of the concept for which they are a pre-requisite.
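The two placement strategies for external pre-requisites can be sketched as follows. This is an illustrative sketch; the row-of-lists representation of the poset and the function names are assumptions.

```python
# Illustrative sketch of the two placement strategies for external
# pre-requisites. A poset is a list of rows; each row is a list of concepts.
def add_prereqs_front(poset, prereqs):
    """Add external pre-requisites as a new first row (as in Table 6)."""
    return [list(prereqs)] + poset

def add_prereqs_before(poset, row_index, prereqs):
    """Insert external pre-requisites in a new row just before row_index."""
    return poset[:row_index] + [list(prereqs)] + poset[row_index:]

# A fragment of poset 400P.
poset_400P = [
    ["2ndLaw.Definition1", "2ndLaw.Definition2"],
    ["2ndLaw.Proof"],
]
poset_400Q = add_prereqs_front(
    poset_400P, ["Mechanics.Momentum", "Mechanics.Acceleration"])
print(poset_400Q[0])  # ['Mechanics.Momentum', 'Mechanics.Acceleration']
```

Both helpers return a new poset rather than mutating the input, so the original poset 400P remains available for comparison.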
In embodiments where optional step 506 is omitted, no external pre-requisites are added to the poset, and posets 400P and 400Q are therefore identical.
In an embodiment, each concept in topic poset 400Q is tagged as either ‘Mandatory’ or ‘Optional’. A ‘Mandatory’ tag implies that the concept must be included in a content module generated for topic poset 400Q. On the other hand, an ‘Optional’ tag implies that a content module for topic poset 400Q may or may not include the concept. For example, in topic poset 400Q, theoretical concepts such as 2ndLaw.Definition1, 2ndLaw.Definition2, and 2ndLaw.Proof are tagged ‘Mandatory’, while other concepts relating to applications, numerical problems, and external pre-requisites are tagged ‘Optional’. This tagging denotes that in order to teach Newton’s 2nd Law, it is necessary to teach all the theoretical concepts, but some applications, numericals, and/or external pre-requisites can be omitted from the corresponding content module, as described later in this description.
In an embodiment, the tagging of concepts as ‘Mandatory’ or ‘Optional’ is done by teacher 108 using a suitable user interface made available via UIU 202.
In an embodiment, each concept in topic poset 400Q also has an associated interest rating. The interest rating can be on any suitable scale, such as a binary scale, or a scale of 1 to 10. The interest rating of a concept indicates the degree of interest that the concept ignites in learners.
In an embodiment, as and when a concept is added, it is assigned a default interest rating, which may be manually overridden by teacher 108. Further, in various embodiments, the interest rating is updated using feedback from learners who have studied the concept. The feedback is obtained either explicitly through polls and tests or implicitly by observing the learner’s activity.
Then at step 508, one or more segments are associated with each concept in the poset. For example, an associations chart shown in the table below lists the segments associated with concepts in topic poset 400Q.
Row Concept(s) Segment(s)
R(0) Mechanics.Momentum Momentum(0), Momentum(1)
Mechanics.Acceleration Acceleration(0), Acceleration(1), Acceleration(2)
R(1) 2ndLaw.Definition1 Definition1(0), Definition1(1)
2ndLaw.Definition2 Definition2(0)
R(2) 2ndLaw.Proof Proof(0)
R(3) 2ndLaw.SimpleNum1 SimpleNum1(0), SimpleNum1(1)
2ndLaw.SimpleNum2 SimpleNum2(0)
2ndLaw.SimpleNum3 SimpleNum3(0)
2ndLaw.TrickyNum1 TrickyNum1(0)
2ndLaw.TrickyNum2 TrickyNum2(0)
R(4) 2ndLaw.Application1 Application1(0), Application1(1)
2ndLaw.Application2 Application2(0)
Table 7 – Association chart listing segments associated with concepts in topic poset 400Q
In the interest of conciseness and improved readability in connection with Table 7, segment addresses are shortened to only their last-level node name with segment number in parenthesis. For example, segment 2ndLaw.Definition1(1) is simply referred to as Definition1(1).
As Table 7 shows, at least one segment is associated with each concept. Further, concepts Mechanics.Momentum, 2ndLaw.Definition1, 2ndLaw.SimpleNum1, and 2ndLaw.Application1 have two associated segments, while concept Mechanics.Acceleration has three.
Then at step 510, a value of cognitive requirement CS is assigned to each segment associated with the poset. In an embodiment, as and when a new segment is associated with a concept, it is assigned a default CS value, which may be manually overridden by teacher 108. In various embodiments, the CS value is refined over time. After segment S is presented to a learner, the learner’s impression of segment S is sought through a poll. The poll can be conducted each time segment S is presented. Alternatively, it can be randomly conducted only sometimes after segment S is presented. In an embodiment, the poll asks the learner how difficult, easy, or interesting she found segment S to be. The poll results are used to update CS for the segment automatically and/or manually.
In an embodiment, cognitive requirement CS of a segment is updated as the average of the cognitive ability CL of the learners who found the segment appropriate (i.e. neither too tough nor too easy). Optionally, the feedback data from other learners who found the segment too easy or too tough is also considered. For learners who found the segment easy, an adjusted cognitive profile CL_adjusted is calculated by subtracting an offset cognitive profile COFF from CL. For learners who found the segment tough, an adjusted cognitive profile CL_adjusted is calculated by adding an offset cognitive profile COFF to CL. The CL_adjusted values thus obtained are used in the averaging for calculating the updated CS.
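A minimal sketch of this averaging, with learners grouped by their poll response as in the description above. The representation of cognitive profiles as plain vectors of floats is an assumption:

```python
def update_cs(appropriate, too_easy, too_hard, c_off):
    """Update a segment's cognitive requirement CS as the average of the
    (adjusted) learner cognitive-ability vectors CL.

    appropriate / too_easy / too_hard: lists of CL vectors, grouped by
    the learner's poll response for the segment.
    c_off: the offset cognitive profile COFF.
    """
    adjusted = [list(cl) for cl in appropriate]
    # Learners who found the segment easy: subtract the offset profile.
    adjusted += [[c - o for c, o in zip(cl, c_off)] for cl in too_easy]
    # Learners who found the segment tough: add the offset profile.
    adjusted += [[c + o for c, o in zip(cl, c_off)] for cl in too_hard]
    n = len(adjusted)
    # Component-wise average over all (adjusted) profiles.
    return [sum(col) / n for col in zip(*adjusted)]
```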
FIG. 6 shows a method of managing a learner profile according to an embodiment of the present invention. The method is described below with the example of a learner L with cognitive ability CL.
In various embodiments, the method described with reference to FIG. 6 is performed by LPM 206 using factors such as learner L’s activity, feedback, and/or testing on ACS 102 as observed by UIU 202.
At step 602, cognitive ability CL is maintained using one or more of learner activity tracking, learner feedback, and learner testing.
Learner Activity Tracking
In an embodiment, UIU 202 tracks the learner L’s activity and reports it to LPM 206 in an activity log. The tracked parameters include at least the content segments presented to the learner and the learner’s actions while consuming those segments. The learner’s actions include pausing of the content, repeating the content, skipping over the content, and so on.
LPM 206 performs a heuristic analysis of the activity log to assess the learner’s reaction to each segment. For example, if the learner repeats the content multiple times, or drops off, then the behavior indicates that the learner found the content difficult to understand. On the other hand, if the learner skips over or speeds up the content, then the behavior indicates that the learner found the content easy to understand. Finally, if the learner consumes the content segment at its natural pace, i.e. without taking any actions to increase or decrease the pace, then the behavior indicates that the user found the segment appropriate.
Based on said heuristic analysis, LPM 206 refines its assessment of the learner’s cognitive ability CL.
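The heuristic analysis above can be sketched as follows. The activity-log representation (a list of action strings) and the repeat threshold are assumptions made for illustration:

```python
def assess_reaction(actions):
    """Heuristically classify a learner's reaction to a segment from the
    actions recorded in the activity log (assumed here to be a list of
    strings such as 'repeat', 'drop_off', 'skip', 'speed_up')."""
    if actions.count('repeat') >= 2 or 'drop_off' in actions:
        return 'difficult'    # repeated viewing or abandonment
    if 'skip' in actions or 'speed_up' in actions:
        return 'easy'         # learner rushed past the content
    return 'appropriate'      # consumed at its natural pace
```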
The foregoing heuristic analyses are only exemplary for illustrating the principles of an embodiment of the present invention. A person skilled in the art will appreciate that a variety of different activities can be tracked, and different heuristic assessment schemes employed in conjunction with the present invention without deviating from its spirit and scope.
Learner Feedback
In an embodiment, the learner is explicitly prompted for feedback on how difficult or easy the learner found a segment S to be. Based on the learner’s feedback, LPM 206 refines its assessment of the learner’s cognitive ability CL.
Testing
In an embodiment, the learner is tested using tests from a test bank to periodically assess her cognitive ability and update it in her profile. In various embodiments, the testing can be done during special practice sessions, at the beginning and/or end of content modules, or between two segments. The testing opportunities can be scheduled or randomly selected. The tests can be of question-answer type, multiple choice type, or any other type without limitation.
The test bank includes cognitive tests and subject matter tests. Cognitive tests perform psychometric testing of the learner, and are used to update the learner’s cognitive ability profile CL. On the other hand, subject matter tests test the learner’s understanding of the subject matter presented in one or more segments, and are used to update the learner’s cognitive ability profile CL and/or prior knowledge as described later in this description.
In an embodiment, each test has an associated cognitive requirement profile CT representing the cognitive ability required to successfully complete the test. For M tests administered to learner L, the corresponding cognitive requirement profiles are denoted as CT[1] to CT[M], and the corresponding results for learner L as rL[1] to rL[M] where rL[i] is 1 if learner L successfully completed the ith test and 0 if she did not.
In an embodiment, learner L’s cognitive ability CL is updated as the result weighted average of the administered tests, as per the following formula:
C_L = (Σ_(i=1)^M r_L[i] · C_T[i]) / M … Eq. (4)
Similarly, CL can be updated as a result weighted moving average, cumulative moving average, exponential moving average, and so on, without limitation. It will be apparent to a person skilled in the art that a variety of techniques suitable for updating CL in view of the test results are well known, and more may be developed in the future. Any such suitable technique can be employed in conjunction with the present invention without deviating from its spirit and scope.
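The result-weighted average of Eq. (4), and an exponential-moving-average alternative, can be sketched as follows. The vector representation of cognitive profiles and the smoothing factor alpha are assumptions:

```python
def result_weighted_average(results, test_profiles):
    """Eq. (4): CL as the result-weighted average of the cognitive
    requirement profiles CT[1..M] of the administered tests.
    results[i] is 1 if the learner passed the i-th test, else 0."""
    m = len(results)
    dims = len(test_profiles[0])
    return [sum(results[i] * test_profiles[i][d] for i in range(m)) / m
            for d in range(dims)]

def ema_update(c_l, c_t, result, alpha=0.2):
    """One alternative: an exponential-moving-average update of CL that
    moves toward CT only for tests the learner passed (alpha assumed)."""
    if not result:
        return c_l
    return [(1 - alpha) * c + alpha * t for c, t in zip(c_l, c_t)]
```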
Accordingly, LPM 206 refines its assessment of the learner’s cognitive ability CL. Further, in an embodiment, the feedback data collected through learner activity tracking, learner feedback and testing is stored in a training data set and used for training machine learning algorithms as described later in this description.
At optional step 604, learner L’s prior knowledge record is maintained based on her activity and/or test results. An example knowledge record of learner L is presented in the table below:
Concept Confidence (K)
Mechanics.Momentum 10
Mechanics.Acceleration 10
2ndLaw.Definition1 10
2ndLaw.Definition2 10
2ndLaw.Proof 5
2ndLaw.SimpleNum1 8
2ndLaw.SimpleNum2 9
2ndLaw.SimpleNum3 2
2ndLaw.TrickyNum1 3
2ndLaw.TrickyNum2 2
2ndLaw.Application1 4
2ndLaw.Application2 1
Table 8 – Example knowledge record of a learner L
In the illustrated embodiment, the knowledge record lists various concepts known to learner L and a confidence value (K) for each of these concepts on a scale of 1 to 10, higher values indicating higher confidence. Confidence K of a learner L for a concept is denoted as KL[concept]. Thus, as per Table 8, KL[Mechanics.Momentum] = 10 implies that learner L knows the concept Mechanics.Momentum with a high degree of confidence. On the other hand, KL[2ndLaw.Application2] = 1 implies that learner L knows the concept 2ndLaw.Application2 with a low degree of confidence.
Whenever a concept is presented to learner L, it is added to her knowledge record with an initial confidence level. In an embodiment, the initial confidence level is assigned a default value (in this case 5). In an embodiment, the initial confidence level is increased or decreased based on the heuristic analysis of the learner’s activity while consuming content related to the concept. If the heuristic analysis indicates that the learner found the concept simple, a higher confidence value is assigned, and vice versa.
Further, the confidence value is suitably adjusted based on the learner’s results for tests related to the concept, with correct answers increasing the confidence and vice versa. In an embodiment, a short-term memory of test results is used for updating confidence values. Thus, if a learner fails at initial tests, but consistently succeeds thereafter, the initial failures do not keep the confidence value low for long.
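A minimal sketch of confidence adjustment with such a short-term memory. The window size of 5 and the step of one point per result are assumptions made for illustration:

```python
from collections import deque

def updated_confidence(base, test_results, window=5, k_max=10):
    """Adjust a concept's confidence value using only a short-term
    memory of test results: just the last `window` results influence
    the value, so early failures fade once the learner starts
    succeeding. Results are 1 (correct) or 0 (incorrect)."""
    recent = deque(test_results, maxlen=window)   # keep only the tail
    delta = sum(1 if r else -1 for r in recent)   # +1 / -1 per result
    return max(1, min(k_max, base + delta))       # clamp to the scale
```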
FIG. 7 shows a method of creating a content module according to an embodiment of the present invention.
In various embodiments, the method described with reference to FIG. 7 is performed by ACC 208 in response to a request for a content module pertaining to a topic/concept and to be presented to a learner L. In an embodiment, the request is received from UIU 202.
At step 702, a topic poset is selected based on the request. In an embodiment, the topic is selected based on a concept address included in the request. For example, if a request is received for a content module on topic 2ndLaw for learner L, then topic poset 400Q is selected.
At optional step 704, optional concepts in topic poset 400Q, i.e. concepts tagged ‘Optional’, are considered for removal based on the learner profile of learner L. In an embodiment, optional concepts already known to learner L and/or optional concepts too simple for learner L are removed.
Known optional concepts are identified by the knowledge record of learner L, such as those where learner L already has a high confidence value. Thus, from topic poset 400Q of Table 6, for learner L with a knowledge record as shown in Table 8, the concepts of Mechanics.Momentum and Mechanics.Acceleration are removed since learner L has a high confidence level of 10 for these concepts, and they are optional concepts. On the other hand, even though learner L has a high confidence level for concepts 2ndLaw.Definition1 and 2ndLaw.Definition2, they are ‘Mandatory’ concepts for topic poset 400Q and therefore not removed.
Further, optional concepts that are too simple for learner L are identified by comparing the cognitive distance between learner L and the concept to a threshold, and removing any concepts where the cognitive requirement to understand the concept is less than the cognitive ability of learner L, and the distance between them is greater than a threshold. In an embodiment, for concepts associated with multiple segments, the cognitive requirement CS of the segment which is closest to the cognitive ability of learner L, CL, is considered as the cognitive requirement of the concept. E.g. although 2ndLaw.SimpleNum3 does not have a high confidence level for learner L, it is still removed for being too simple for learner L.
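A sketch of this removal step. The mapping of concepts to (tag, list of segment CS vectors), the Euclidean distance as the cognitive distance measure, and the confidence threshold of 8 are all assumptions made for illustration:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def remove_optional(concepts, knowledge, c_l, threshold, known_conf=8):
    """Sketch of step 704: drop optional concepts that are already known
    (high confidence in the knowledge record) or too simple (cognitive
    requirement below the learner's ability by more than a threshold).

    concepts: concept name -> (tag, [segment CS vectors])
    knowledge: concept name -> confidence value K
    c_l: the learner's cognitive ability vector CL
    """
    kept = {}
    for name, (tag, cs_list) in concepts.items():
        if tag == 'Mandatory':
            kept[name] = (tag, cs_list)       # mandatory: never removed
            continue
        if knowledge.get(name, 0) >= known_conf:
            continue                          # already known: removed
        # The segment closest to the learner's ability stands in for the
        # concept's cognitive requirement.
        cs = min(cs_list, key=lambda c: euclidean(c, c_l))
        too_simple = (all(c <= l for c, l in zip(cs, c_l))
                      and euclidean(cs, c_l) > threshold)
        if not too_simple:
            kept[name] = (tag, cs_list)
    return kept
```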
Concepts are removed from topic poset 400Q in the foregoing manner to obtain a reduced poset 400R as shown in the table below, with the removed concepts marked ‘(removed)’.
Row Concept(s)
R(0) Mechanics.Momentum (removed), Mechanics.Acceleration (removed)
R(1) 2ndLaw.Definition1, 2ndLaw.Definition2
R(2) 2ndLaw.Proof
R(3) 2ndLaw.SimpleNum1 (removed), 2ndLaw.SimpleNum2 (removed), 2ndLaw.SimpleNum3 (removed), 2ndLaw.TrickyNum1, 2ndLaw.TrickyNum2
R(4) 2ndLaw.Application1, 2ndLaw.Application2
Table 9 - Reduced poset 400R
Then at step 706, for each concept in reduced poset 400R, an associated segment is selected from among all the associated segments of that concept (as recorded in the association chart described with reference to Table 7), and a segment poset 400S is generated as shown below. In an embodiment, the selected segment is the one that has the least cognitive distance from learner L.
Row Segment(s)
R(1) Definition1(1), Definition2(0)
R(2) Proof(0)
R(3) TrickyNum1(0), TrickyNum2(0)
R(4) Application1(1), Application2(0)
Table 10 – Segment poset 400S
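A minimal sketch of this selection step. The association-chart representation (concept name to a list of (segment name, CS vector) pairs) and the Euclidean distance as the cognitive distance measure are assumptions:

```python
def select_segments(reduced_poset, associations, c_l):
    """Step 706 sketch: for each concept in the reduced poset, pick the
    associated segment whose cognitive requirement CS is closest to the
    learner's cognitive ability CL.

    reduced_poset: list of rows, each a list of concept names
    associations: concept name -> [(segment name, CS vector), ...]
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    segment_poset = []
    for row in reduced_poset:
        segment_poset.append(
            [min(associations[c], key=lambda s: dist(s[1], c_l))[0]
             for c in row])
    return segment_poset
```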
Finally, at step 708, segment poset 400S is converted to a content module. In the illustrated embodiment, the content module is an ordered list of segment addresses. It will be apparent to a person skilled in the art that this list, once prepared, can be translated to various other forms such as, but not limited to, a video lecture, an audio lecture, a presentation, a playlist, and so on using well known techniques and without deviating from the spirit and scope of the present invention.
Segments from the first row of segment poset 400S are placed in the content module list, followed similarly by segments from the successive (i.e. second, then third, and so on) rows.
Segments in the same row are arranged in the content module in the order of ascending difficulty, descending interest, or a combination thereof.
In an embodiment, the difficulty of a segment is assessed from its associated cognitive requirement CS. The cognitive requirements of all segments in the same row are compared using cognitive comparison techniques described earlier, and the segments are arranged in the content module in increasing order of cognitive requirement.
In another embodiment, learner L’s prior knowledge of a segment is considered while determining how difficult the segment may be for her specifically. Thus, the difficulty of a segment is represented by an effective cognitive requirement CS|L of the segment S for learner L, which is calculated using the following formula:
C_(S|L) = C_S × (1 − K_(L|S) / K_MAX) … Eq. (5)
Where
CS|L is the effective cognitive requirement of segment S for learner L,
CS is the cognitive requirement of segment S,
KL|S is the confidence value of learner L for the concept associated with segment S, and
KMAX is the maximum possible confidence value on the confidence value scale.
Segments in the same row are then arranged in the content module in ascending order of CS|L.
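Eq. (5) and the resulting row ordering can be sketched as follows. Reducing the effective-requirement vector to a scalar magnitude for sorting is an assumption made for illustration:

```python
def effective_cs(c_s, k_ls, k_max=10):
    """Eq. (5): the effective cognitive requirement CS|L of segment S
    for learner L, scaled down by L's prior confidence K(L|S)."""
    return [c * (1 - k_ls / k_max) for c in c_s]

def order_row(segments, k_max=10):
    """Arrange the segments of one row in ascending order of the
    magnitude of their effective cognitive requirement.
    segments: list of (name, CS vector, confidence K(L|S)) tuples."""
    def magnitude(seg):
        _, c_s, k_ls = seg
        return sum(x ** 2 for x in effective_cs(c_s, k_ls, k_max)) ** 0.5
    return [name for name, _, _ in sorted(segments, key=magnitude)]
```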
In an embodiment, the interest rating of the segments in the same row is used to arrange them in descending order of interest. Thus, the more interesting segments are placed earlier in the content module to arouse the learner’s interest.
It will be apparent to a person skilled in the art that various combinations, variations, and modifications of the foregoing approaches are possible, and that any of them can be advantageously deployed in conjunction with the present invention without deviating from its spirit and scope.
Using the foregoing principles, the segments of segment poset 400S are arranged into a content module as shown in the table below:

Segment(s)
Definition1(1)
Definition2(0)
Proof(0)
TrickyNum2(0)
TrickyNum1(0)
Application2(0)
Application1(1)
Table 11 – An example content module
MACHINE LEARNING
As ACS 102 is deployed and used, it generates an appreciable amount of data based on generated content modules, learner feedback, learner test results, and learner activity. This data can be used for training one or more machine learning (ML) algorithms to augment or even replace some decision-making processes described above.
For example, in various embodiments, the ML techniques are used by ACS 102 to create a content module from available segments for a particular learner. Similarly, in an embodiment, ML techniques are used to update cognitive requirement CS of a segment S based at least in part on feedback obtained from one or more learners who have consumed segment S.
The AI/ML techniques include, without limitation, supervised learning, unsupervised learning, reinforcement learning, and rule-based learning.
ITEM 1: MATCHING A LEARNER WITH A SEGMENT
A neural network takes as input the cognitive ability (CL) and previous knowledge confidence level (KL) of a learner, together with the cognitive requirement of a segment (CS, interchangeably referred to as CP in the ensuing description and/or associated drawings) and the topic of the segment (TS, interchangeably referred to as TP in the ensuing description and/or associated drawings), and ranks a given set of candidate segments according to their relevance.
Topic of a Segment (TS): The topic of a segment (TS) is a representation of the topic or concept with which the segment is associated. In an embodiment, TS is one-hot encoded.
In alternative embodiments, additional features associated with a segment are encoded within TS. For example, additional features like sub-topic, chapter/section are encoded within TS.
CS already includes information about the segment and additional information/features of the segment encoded in TS help in matching by modelling influences of those additional features as well.
Collecting training data
Training data is collected by matching the learners with appropriate segments using manual rules such as ‘CS = CL’ or similar heuristics. The relevance of each candidate segment is then scored as the dot product of the learner’s query vector Vq with the segment’s option vector; for example, for four candidate segments with option vectors Vo1 to Vo4:
dot(Vq, Vo1) -> R1 -> 0.8
dot(Vq, Vo2) -> R2 -> 0.9
dot(Vq, Vo3) -> R3 -> 0.6
dot(Vq, Vo4) -> R4 -> 0.75
Here, the order of preference for that particular learner among those four options is {segment2, segment1, segment4, segment3}.
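The dot-product ranking above can be sketched as follows; the vector values here are illustrative only:

```python
def rank_segments(v_q, options):
    """Score each candidate segment as the dot product of the learner's
    query vector Vq with the segment's option vector Vo, and return the
    segment names ranked by descending relevance."""
    scores = {name: sum(q * o for q, o in zip(v_q, v_o))
              for name, v_o in options.items()}
    return sorted(scores, key=scores.get, reverse=True)
```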
ITEM 2: INITIALIZING (AND/OR ADJUSTING) CS FROM CL AND FEEDBACK
This item discloses a mechanism to estimate and/or adjust CS using the CL, KL, and feedback of a learner after seeing a segment, along with the segment’s TS.
Data collection and data representation
We record the cognitive abilities CL and knowledge level KL of a learner, and the TS and CS of a segment, along with feedback, for all pairs where the feedback is ‘Good’. Each CS and CL value is a vector of numbers whose length is the number of different aspects of cognitive ability.
Estimating Cs for a new video segment
We generate an estimate of Cs of a segment given CL and KL of a learner and Ts of a segment.
Cs Network (Neural network architecture)
FIG. 12 depicts CS (interchangeably referred to as Cp) network architecture. Cs Network is a 3 layer MLP with ReLU as an activation function between the layers.
Cs Network training
FIG. 13 depicts CS (interchangeably referred to as CP) network training. One data sample: X(CL, KL, TS) -> Y(CS). [Note: The predicted CS may also be viewed as the ‘effective’ CL, in the sense that, given (1) a learner (with known ability and prior knowledge) and (2) a particular topic, it is the ‘effective’ ability of this learner vis-a-vis this topic. This can be averaged over multiple learners to get an estimate of the ability required for the segment (the actual CS, which is to be predicted/updated).]
The CS network takes the CL and KL of a learner and the TS of a segment, and predicts CS.
The loss used in training Cs network is the MSE loss between predicted CS and ground truth Cs.
Loss function = MSE(predicted Cs, ground truth Cs)
We train the Cs network with backpropagation using mini-batch stochastic gradient descent.
Cs Network inference
The trained Cs network takes CL, KL, and Ts and predicts the Cs value. In our method, each CL, KL and Ts will get an estimate of Cs.
To estimate the Cs of a new video, we take all the CL and KL values of a learner where the feedback is ‘Good’ and Ts of the segment and predict the Cs values with Cs network. We average all the Cs estimates from different CL and KL pairs to get the Cs value of the new video.
In case we are adjusting the CS of an already existing video, which has a CS value beforehand, we use a value P, between 0 and 1, which is the weightage of the new CS estimate.
Then, the new value of Cs = (1-P) * (old Cs) + P * (Estimated Cs)
P is a hyperparameter; suitable values can be chosen in the range (0, 1], and preferably in the range (0.2, 0.6).
ITEM 3: ESTIMATING THE INTEREST FOR NEW LEARNER-SEGMENT PAIRS
A machine learning model for estimating interest of a learner for a particular segment is disclosed.
Data collection and data representation
For every interest poll feedback of a learner on a particular segment, we collect the CL and KL of the learner at that point of time, the TS and CS of the segment, and the interest feedback, which can be any integer between 0 and 10.
Cognitive requirements of the segment (CS): a quantitative measure of the requirements against different cognitive abilities, represented as a set of numbers of fixed length across all segments.
Interest Network (Neural network architecture)
FIG. 14 depicts an interest network. The Interest Network is a 4-layer MLP with a ReLU activation function. There could be dropout or batchnorm layers between the hidden layers.
Neural network training
Our neural network takes CL and KL of a learner, Ts and Cs of a segment to predict a scalar value.
Interest network is trained with Mean Squared Error loss between predicted interest Ip and the ground truth interest feedback Ig. See FIG. 15.
Loss function = MSE(Ip, Ig)
In alternative embodiments, other loss functions are used instead of MSE. For example, Mean Absolute Error (MAE)[9], Root Mean Squared Error (RMSE)[10] and smoother versions of MSE like Huber loss[11] and Log-Cosh loss[12] can be used.
Neural network inference
After the neural network is trained, a learner’s interest in a particular segment is inferred as follows:
The Interest network takes CL and KL of a learner and Ts and Cs of a segment to predict a scalar value. This scalar value is the interest of the learner against the current segment.
Although the predicted values will be between 0 and 10 most of the time, if an input very different from the training set is given, the Interest network could predict values outside 0 to 10. So, we have a post-processing step which clamps any value less than 0 to 0 and any value greater than 10 to 10. See FIG. 16.
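The post-processing step is a simple clamp to the interest scale; a minimal sketch:

```python
def clamp_interest(value, lo=0.0, hi=10.0):
    """Clamp an Interest-network prediction to the valid interest scale:
    values below 0 become 0, values above 10 become 10."""
    return max(lo, min(hi, value))
```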
ITEM 4: ESTIMATING THE FEEDBACK OF THE LEARNER
A machine learning model to estimate the feedback of a learner after seeing a segment.
Data collection and data representation
We record cognitive abilities CL and knowledge level KL of a learner, and Ts and Cs of a segment along with feedback Yf which could be {‘Hard’, ‘Good’, ‘Easy’}.
Estimating the feedback of learner for a segment
We generate probabilities of feedback being {‘Hard’, ‘Good’, ‘Easy’} of a segment given CL and KL of a learner, and Ts and Cs of a segment.
Feedback Network (Neural network architecture)
FIG. 17 depicts a feedback network architecture. Feedback Network is a 3 layer MLP with ReLU as an activation function between the layers.
Feedback Network training
FIG. 18 depicts a feedback network training.
One datasample: X(CL, KL, Ts, Cs) -> Y(Yf)
Feedback network takes CL and KL of a learner and Ts and Cs of a segment and outputs three values in the output layer. These values are converted into probabilities of feedback being {‘Hard’, ‘Good’, ‘Easy’} by softmax operation.
The loss used in training the Feedback network is the Negative Log Likelihood (NLL) loss (Loss Function typically used in multi-class classification problems) between predicted feedback Yp (after softmax operation) and ground truth label Yg.
Loss function = NLL(Yp, Yg)
We train the feedback network with backpropagation using mini-batch stochastic gradient descent. The model with the highest accuracy on the validation set is taken. As an alternative, ROC-AUC could be used to mitigate the dataset imbalance problem.
In alternative embodiments, other loss functions are used instead of NLL. For example, instead of Negative Log-Likelihood (NLL) loss for this multi-class classification problem, we could use large-margin softmax (L-Softmax) [13], which explicitly supports inter-class separability and avoids overfitting, or Leaky Expected Error Loss [14], which deals better with noisy labels.
Further, in alternative embodiments, upsampling/downsampling of a particular class is used. With only a few data points for a particular class, the model may not learn enough about that class and may be biased towards the other classes. (In our case, the proportion of data points where the feedback is ‘Hard’ or ‘Easy’ might be very low compared to ‘Good’.) To mitigate this, we could upsample the data points of the under-represented class, i.e. duplicate the data points of the class with fewer data points.
In yet another alternative embodiment, a per-class weighted loss function is used. Instead of having equal weights for every class, we could have different weights for each class to penalize the model differently for mistakes in each class. Here, we could use this to tackle the dataset unbalance problem by penalizing the model for mistakes of the classes ‘Hard’ and ‘Easy’ more compared to ‘Good’.
This is accomplished by passing the ‘weight’ argument to the ‘torch.nn.NLLLoss’ class, as described in the PyTorch documentation at https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html.
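The effect of a per-class weighted loss can be illustrated with a pure-Python sketch of softmax followed by weighted NLL; this mirrors the behavior of the `weight` argument of `torch.nn.NLLLoss` but is not the PyTorch implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw network outputs."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_nll(logits, target, class_weights):
    """Per-class weighted NLL loss for the three feedback classes
    {'Hard', 'Good', 'Easy'}: mistakes on a minority class are
    penalized more by assigning it a larger weight."""
    probs = softmax(logits)
    return -class_weights[target] * math.log(probs[target])
```

With equal logits, weighting the ‘Hard’ class twice as heavily doubles its loss relative to ‘Good’, which pushes training to pay more attention to the rarer class.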
Feedback Network inference
FIG. 19 depicts feedback network inference. The trained Feedback network takes CL, KL, TS, and CS to predict the probabilities of {‘Hard’, ‘Good’, ‘Easy’}.
ITEM 5: RECOMMENDING PERSONALIZED SEQUENCE OF SEGMENTS IN EACH ROW
We formulate recommending the right sequence as a Markov Decision Process (MDP) and solve it by REINFORCE (an on-policy policy gradient method) and/or by an off-policy policy gradient method.
A Markov Decision Process is usually denoted by a tuple (S, A, P, R, γ), where
S is a set of states, A is a set of actions,
P(s, a, s′) = Pr[s′ | s, a] is the transition probability that action a in state s will lead to state s′,
R(s, a) = E[r | s, a] is the expected reward that an agent will receive when the agent takes action a in state s, and
γ ∈ [0, 1] is the discount factor representing the importance of future rewards.
An MDP is solved by learning an optimal policy, i.e. knowing the optimal action to take in a particular state, π(a|s). The policy is modeled with a function parameterized with respect to θ, π_θ(a|s). Here, the parameterized function is a variant of an RNN, GRU, or LSTM, which takes a state and predicts a probability for each action.
Data representation and problem formulation
The segment is represented by Cs and the one-hot encoding of the concept of the segment. The learner is represented by CL and KL.
Reinforcement learning Methods
One approach for the on-policy learning method (REINFORCE) is described in Williams, R. J. (1988). Toward a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, Northeastern University, College of Computer Science, and its application herein is described below.
Initialize the policy parameter θ of the RNN at random.
Generate one trajectory on policy π_θ: S1, A1, R2, S2, A2, …, ST.
For t = 1, 2, …, T:
Estimate the return Gt;
Update the policy parameters of the RNN with the gradient of the objective function J(θ):
∇_θ J(θ) = E_π[Q(s, a) ∇_θ ln π_θ(a|s)]
where Q is the action value function.
One approach for the off-policy learning method (Off-policy actor-critic) is described in Thomas Degris, Martha White, and Richard S. Sutton. “Off-policy actor-critic.” ICML 2012, and its application herein is described below.
The off-policy approach does not require full trajectories and can reuse any past episodes (“experience replay”) for much better sample efficiency.
The sample collection follows a behavior policy β different from the target policy, bringing better exploration.
The gradient update for the RNN is determined by the formula:
∇_θ J(θ) = E_β[(π_θ(a|s) / β(a|s)) Q(s, a) ∇_θ ln π_θ(a|s)]
where β(a|s) is the behavior policy the learning agent started with.
In alternative embodiments, various different policy gradient methods are used. With the same formulation of state and actions, we could use different policy gradient methods, both off-policy and on-policy like DDPG[17], TRPO[18], PPO[19], ACER[20], ACTKR[21], SAC[22], SVPG[23], IMPALA[24].
Reinforcement learning training
FIG. 20 depicts reinforcement learning training: the definitions of states, actions, transition function, reward, and discount factor as inputs, outputs, and loss function in training an RNN policy.
Problem specific definitions
State: The state contains CL and KL and the previously recommended segments of a particular row. It is continuous, and is encoded by an RNN as hi, with inputs CLi, KLi, CSi, and KPi at each step where we have to take an action.
Action: The action at a particular state, π(a|s), is modeled by the prediction of the RNN at each step, i.e. the appropriate CS of the next segment predicted at each time step. The segment having a CS nearest to the predicted CS is taken as the recommendation at that step.
Transition function: The transition function from a particular state s with a particular action a is modeled by the RNN processing step by step, i.e. transitioning from (hi, CLi, KLi, CSi, KPi) -> (h(i+1), CL(i+1), KL(i+1), CS(i+1), KP(i+1)). [Not important in our case because we are doing ‘model-free’ learning.]
Reward: The reward function is an integer reward quantized from the feedback of the learner after seeing a segment. For a recommendation at a particular step, the rewards (in points) are: skip = -10; repeat = -10; survey: good = +10, hard = -10, easy = -10. Additionally, the learner can be tested after the sequence of segments, and the reward taken as an integer between 1 and 10 based on the learner’s understanding.
The discount factor, γ ∈ [0, 1], is set to 0.9 as per standard practice, and is used to calculate the return Gt of a particular state and action.
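The reward quantization and the discounted return can be sketched as follows; the event-string representation of feedback is an assumption:

```python
def reward(event):
    """Quantize learner feedback events into the integer rewards listed
    above: skip/repeat and 'hard'/'easy' survey answers are penalized,
    a 'good' survey answer is rewarded."""
    table = {'skip': -10, 'repeat': -10,
             'good': 10, 'hard': -10, 'easy': -10}
    return table[event]

def discounted_return(rewards, gamma=0.9):
    """Return Gt of a trajectory: the sum of rewards discounted by
    gamma, accumulated from the last step backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```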
Here, barCS is the predicted CS at each step, and (CS, TS) are the CS and TS of the segment with the nearest CS.
Notes:
- The sequence is for a particular learner; each learner will have his/her own training sequence.
- ri: reward at step i.
- CL, KL: for that learner.
- CSi, TSi: for the ith segment shown.
- The RNN is refined with each step.
- hi: hidden state at step i, which carries information about all the items that came before (an internal parameter).
- Output of the ith step: barCSi, the estimated CS relevant at that step (or the effective CL), which is used to get the next segment (the candidate having CS closest to barCSi).
Inference
FIG. 21 depicts reinforcement learning inference. The learned policy (the RNN) gives the next recommended segment when given the CL and KL of the learner and the previously recommended segments.
General details which apply to all the machine learning algorithms above
In all the machine learning models, the dataset is divided into a train set, a validation set, and a test set. The ratio of train, validation, and test is taken as 6:2:2 but can be modified as preferred. Any hyperparameters or alternatives specified are chosen according to the model’s performance on the validation set after being trained on the train set.
After choosing the hyperparameters, the model performing best on the test set is taken as the trained model. The metric evaluated on the test set is the same as the loss function used for training unless otherwise specified (in Item 4, for feedback prediction, the metric on the test set is accuracy or ROC-AUC, as specified in Item 4).
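The 6:2:2 split can be sketched as follows; shuffling with a fixed seed is an assumption made for reproducibility:

```python
import random

def split_dataset(data, ratios=(6, 2, 2), seed=0):
    """Divide a dataset into train, validation, and test sets in the
    stated 6:2:2 ratio (any other ratio can be passed instead)."""
    data = list(data)
    random.Random(seed).shuffle(data)   # deterministic shuffle
    total = sum(ratios)
    n_train = len(data) * ratios[0] // total
    n_val = len(data) * ratios[1] // total
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])
```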
Transforming 3D matrix into a sequence of segments
Step 1: Transforming the 3D matrix into a 2D matrix
Item 1 (Matching a Segment to a Learner) is used to rank the different variants of a concept according to relevancy, and the top one is selected to reduce the 3D matrix to a 2D matrix.
Step 2: Ignoring some of the optional videos (sometimes even videos labelled ‘must’)
Item 4 (Estimating the Feedback of a Learner; similar to interest, but predicting feedback) is used to estimate the feedback of the learner on all the segments in the matrix, and segments which receive a feedback of ‘Easy’ with high confidence are removed. Different confidence thresholds can be used when removing a segment labelled optional versus one labelled must.
Step 3: Transforming the 2D matrix into a sequence of segments
Item 2 (Estimating CS) and Item 3 (Estimating the Interest) are used.
Every item in the previous row should come before every item in the next row.
We will use the procedure described in the 891 Application.
If K = 0 for a learner, we will order all the items in decreasing order of the CS of the segment. We will use the adjusted CS, which is sensitive to the learner’s preferences, as per Item 2 described here.
If K is not equal to 0, then Cs is adjusted according to the rule presented in the 891 Application.
Note: These estimates are used for sequencing as described in the 891 Application. The difference in this approach is that, instead of CS (as described in the 891 Application), it uses the effective CL / predicted CS as per Item 2.
In an alternative embodiment, we will use an end-to-end learning model based on the feedback of the learner, such as skip, rewind, and the results of the survey and test at the end of the segment or at the end of a single row, which is presented as Item 5 here.
EXAMPLE VARIATIONS AND MODIFICATIONS
The present invention is not to be limited in scope by the specific embodiments described herein. It is fully contemplated that other various embodiments of and modifications to the present invention, in addition to those described herein, will become apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the following appended claims.
For example, AS 116 can be embodied in a local machine, a cloud-based server, a virtual server, a group of servers collectively performing the functionality of AS 116 as disclosed herein, and/or various combinations of the foregoing without limitation.
Similarly, DB 118 can be embodied in a relational database, an eXtensible Markup Language (XML) database, a graph database, an object-oriented database, a distributed database, a centralized database, a cloud database, and/or various combinations of the foregoing without limitation.
User devices 110, 112, and 114 can include, without limitation, desktop computers, laptop computers, smart phones, personal digital assistants (PDAs), smart televisions, dedicated devices, kiosks, and/or any other suitable device or combination of devices capable of implementing functionality as described herein. In an embodiment, user devices 110, 112, and 114 allow their respective users to interact with ACS 102 via generic software such as web browsers, media players, and the like.
In an embodiment, one or more of user devices 110, 112, and 114 include a dedicated application developed for allowing the respective users to interact with ACS 102. Further, in an embodiment, the functionality of UIU 202 is distributed between AS 116 and a user device running the dedicated application.
Various actions attributed to teacher 108 in this description can be performed by a group of one or more teachers and/or one or more system administrators of ACS 102. It will be apparent to a person skilled in the art that members of said group can be assigned role-based access rights that permit them to perform only some actions of teacher 108 but not others.
User devices 110, 112, 114, AS 116, and/or DB 118 can interact over a wired and/or wireless network, a local area network (LAN), a wide area network (WAN), the Internet, the world-wide web (WWW), and/or any other suitable network or combination of networks.
CC 210 can reside in a single database or across multiple different databases, for example, across databases provided by different educational content providers.
It will be apparent to a person skilled in the art that the teachings of the present invention can be reduced to practice in any number of different ways, with such modifications as are presently known in the art and/or developed in the future with advancements in information technology, computing, and human-computer interfaces, without deviating from the spirit and scope of the present invention.
APPLICATIONS
While the teachings of the present invention are described herein with reference to providing online educational content, it will be apparent to a person skilled in the art that these teachings may be advantageously applied in other contexts as well without deviating from the spirit and scope of the present invention. For example, they can be used to provide adaptive content for professional training courses, corporate training courses, professional coaching content, online documentation and manuals, e.g. documentation of a programming language, and so on.
Further, although the present invention has been described herein in the context of particular embodiments and implementations and applications and examples and in particular environments, those of ordinary skill in the art will appreciate that its usefulness is not limited thereto and that the present invention can be beneficially applied in any number of ways and environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present invention as disclosed herein.
REFERENCES
[1] Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural Networks : The Official Journal of the International Neural Network Society, 12(1), 145–151. http://doi.org/10.1016/S0893-6080(98)00116-6
[2] Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence O(1/k2). Doklady AN SSSR (translated as Soviet Math. Dokl.), vol. 269, pp. 543–547.
[3] Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12, 2121–2159. Retrieved from http://jmlr.org/papers/v12/duchi11a.html
[4] Zeiler, M. D. (2012). ADADELTA: An Adaptive Learning Rate Method. Retrieved from http://arxiv.org/abs/1212.5701
[5] Kingma, D. P., & Ba, J. L. (2015). Adam: a Method for Stochastic Optimization. International Conference on Learning Representations, 1–13.
[6] Dozat, T. (2016). Incorporating Nesterov Momentum into Adam. ICLR Workshop, (1), 2013–2016
[7] Loshchilov, I., & Hutter, F. (2019). Decoupled Weight Decay Regularization. In Proceedings of ICLR 2019
[8] Ma, J., & Yarats, D. (2019). Quasi-hyperbolic momentum and Adam for deep learning. In Proceedings of ICLR 2019.
[9] https://en.wikipedia.org/wiki/Mean_absolute_error
[10] https://en.wikipedia.org/wiki/Root-mean-square_deviation
[11] Huber, Peter J. (1964). "Robust Estimation of a Location Parameter". Annals of Mathematical Statistics. 35 (1): 73–101. doi:10.1214/aoms/1177703732. JSTOR 2238020.
[12] R. Neuneier and H. G. Zimmermann. How to train neural networks. In Neural Networks: Tricks of the Trade. 1998.
[13] Liu, W., Wen, Y., Yu, Z. and Yang, M., 2016, June. Large-margin softmax loss for convolutional neural networks. In ICML (Vol. 2, No. 3, p. 7).
[14] Irsoy, Ozan (2019). "On Expected Accuracy." arXiv preprint arXiv:1905.00448.
[15] Williams, R. J. (1988). Toward a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, Northeastern University, College of Computer Science.
[16] Thomas Degris, Martha White, and Richard S. Sutton. “Off-policy actor-critic.” ICML 2012.
[17] Timothy P. Lillicrap, et al. “Continuous control with deep reinforcement learning.” arXiv preprint arXiv:1509.02971 (2015).
[18] John Schulman, et al. “Trust region policy optimization.” ICML. 2015.
[19] Schulman, J., Wolski, F., Dhariwal, P., Radford, A. and Klimov, O., 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
[20] Ziyu Wang, et al. “Sample efficient actor-critic with experience replay.” ICLR 2017.
[21] Yuhuai Wu, et al. "Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation." NIPS 2017.
[22] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor." arXiv preprint arXiv:1801.01290 (2018).
[23] Yang Liu, et al. “Stein variational policy gradient.” arXiv preprint arXiv:1704.02399 (2017).
[24] Lasse Espeholt, et al. "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures." arXiv preprint arXiv:1802.01561 (2018).
CLAIMS
I/we claim:
1. A method of providing content adapted for a learner, the method comprising:
storing a profile of the learner in a learner profile database (212), wherein the profile includes information about the learner’s cognitive ability and/or previous knowledge;
storing educational/training content in a content collection (210) from which at least one content module is derived for the learner;
providing a content collection manager (204) enabling a teacher (108) to manage the content collection (210);
providing a learner profile manager (206) for creating and updating the learner profile database (212); and
providing an adaptive content creator (208) which, in response to a request from user interaction unit (202) for creating a content module for the learner, derives a content module from content collection (210) based on the learner’s profile stored in learner profile database (212) using a machine learning algorithm.
2. The method of claim 1, wherein the machine learning algorithm uses reinforcement learning.
3. The method of claim 1, wherein the machine learning algorithm matches a learner to a content segment in content collection (210).
4. The method of claim 1, wherein the machine learning algorithm estimates the interest for new learner-segment pairs.
5. The method of claim 1, wherein the machine learning algorithm estimates the feedback of a learner.
6. The method of claim 1, wherein the machine learning algorithm recommends a personalized sequence of segments in each row.

Documents

Application Documents

# Name Date
1 202021051427-PROVISIONAL SPECIFICATION [25-11-2020(online)].pdf 2020-11-25
2 202021051427-PROOF OF RIGHT [25-11-2020(online)].pdf 2020-11-25
3 202021051427-POWER OF AUTHORITY [25-11-2020(online)].pdf 2020-11-25
4 202021051427-OTHERS [25-11-2020(online)].pdf 2020-11-25
5 202021051427-FORM FOR STARTUP [25-11-2020(online)].pdf 2020-11-25
6 202021051427-FORM FOR SMALL ENTITY(FORM-28) [25-11-2020(online)].pdf 2020-11-25
7 202021051427-FORM 1 [25-11-2020(online)].pdf 2020-11-25
8 202021051427-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-11-2020(online)].pdf 2020-11-25
9 202021051427-EVIDENCE FOR REGISTRATION UNDER SSI [25-11-2020(online)].pdf 2020-11-25
10 202021051427-DRAWINGS [25-11-2020(online)].pdf 2020-11-25
11 202021051427-DRAWING [24-11-2021(online)].pdf 2021-11-24
12 202021051427-COMPLETE SPECIFICATION [24-11-2021(online)].pdf 2021-11-24
13 Abstract1.jpg 2022-04-18
14 202021051427-RELEVANT DOCUMENTS [21-05-2025(online)].pdf 2025-05-21
15 202021051427-POA [21-05-2025(online)].pdf 2025-05-21
16 202021051427-Form-4 u-r 138 [21-05-2025(online)].pdf 2025-05-21
17 202021051427-FORM 18 [21-05-2025(online)].pdf 2025-05-21
18 202021051427-FORM 13 [21-05-2025(online)].pdf 2025-05-21
19 202021051427-PA [29-08-2025(online)].pdf 2025-08-29
20 202021051427-ASSIGNMENT DOCUMENTS [29-08-2025(online)].pdf 2025-08-29
21 202021051427-8(i)-Substitution-Change Of Applicant - Form 6 [29-08-2025(online)].pdf 2025-08-29