
System And Method For Content Affinity Analytics

Abstract: A system and method for content affinity analytics using a graph, and for exchange of affined learning data across learning management systems (LMS), are provided. The present invention relates to a system and method for performing content affinity analytics in learning management systems such as assessment management systems (AMS), content management systems (CMS) and learning management systems (LMS). The present invention comprises a Data Capture Service Module, an Affinity Analytics Module and a Reporting Module. The Data Capture Service Module models the data management entities into a connected graph. The Affinity Analytics Module identifies the affinity data and creates/updates the relations among the nodes. Further, the Reporting Module exposes APIs which query the graph and return data in JSON format.


Patent Information

Application #
Filing Date
10 October 2016
Publication Number
15/2018
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application
Patent Number
Legal Status
Grant Date
2024-04-24
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai-400021, Maharashtra, India

Inventors

1. BANERJEE, Sujoy
Tata Consultancy Services Limited, Garia Station Road, Kalitala, Koyalpara, Kolkata- 700084, West Bengal, India

Specification

Claims:
1) A hardware processor implemented method for performing analytics on learning content, the method comprising:
populating, by a data capturing module, learning data into a source table in a relational database used as an intermediate data store;
filtering, by an incremental data filtering module initiated by triggers in the source table, the learning data into a destination table that stores only incremental learning data, within the same relational database;
capturing, by a changed data capturing module, the incremental learning data from said incremental data store and pushing the incremental learning data into a data queue;
transforming, by a graph transformation module, the incremental learning data from said data queue to a graph form;
storing, by a graph data access module, the incremental learning data in graph form into a graph database, wherein the graph database comprises the learning data as a plurality of incremental learning data, and wherein the learning data comprises a parent-child hierarchy of learning nodes and edges between the learning nodes;
analyzing, by an affinity analytics module, the incremental learning data in the graph database to establish relationships represented by said edges between the learning nodes, wherein the relationship is based on the parent-child hierarchy of the learning nodes;
creating, by the affinity analytics module, attributes of the edges of relationships by establishing facts about the relationship between learning nodes and affinity insight among the learning nodes;
receiving, through a reporting module, input data from a user;
generating, by the reporting module, a set of recommended questions based on the affinity question graph and the input data; and
sending comprehensive learning data across multiple learning management systems.

2) The method of claim 1, wherein the learning data comprises learning nodes representing books, assignments, questions and learning objectives.
3) The method of claim 2, wherein the questions are texts that establish the learning objectives, comprising the attributes Question Title, Question Weight, Question Source System and Question ID.
4) The method of claim 2, wherein the assignments are tasks allocated to an assignee as part of a course of study, and wherein measurement of assignments is achieved by a plurality of questions comprising the attributes Assignment Id, Assignment Title, Assignment Description, Assignment Start Date, Assignment End Date and Assignment Author ID.
5) The method of claim 2, wherein the books are representations of subject matter described through a combination of textual representations, illustrations, audio and video, comprising the attributes Author, ISBN (International Standard Book Number), Edition and Publication.
6) The method of claim 1, wherein the affinity between any two of the learning nodes is established by determining the number of common parent nodes shared by the two learning nodes.
7) The method of claim 1, wherein the input data from a user comprises selection of one of a plurality of questions in a course content through a user interface within a learning management system.
8) The method of claim 1, further comprising sending the set of recommended questions over a learning measurement framework for sharing with multiple user groups.
9) A system for performing analytics on learning content, the system comprising:
a memory storing instructions; and
a processor communicatively coupled to said memory, wherein said processor is configured by said instructions to:
populate, by a data capturing module, learning data into a source table in a relational database used as an intermediate data store;
filter, by an incremental data filtering module initiated by triggers in the source table, the learning data into a destination table that stores only incremental learning data, within the same relational database;
capture, by a changed data capturing module, the incremental learning data from said incremental data store and push the incremental learning data into a data queue;
transform, by a graph transformation module, the incremental learning data from said data queue to a graph form;
store, by a graph data access module, the incremental learning data in graph form into a graph database, wherein the graph database comprises the learning data as a plurality of incremental learning data, and wherein the learning data comprises a parent-child hierarchy of learning nodes and edges between the learning nodes;
analyze, by an affinity analytics module, the incremental learning data in the graph database to establish relationships represented by said edges between the learning nodes, wherein the relationship is based on the parent-child hierarchy of the learning nodes;
create, by the affinity analytics module, attributes of the edges of relationships by establishing facts about the relationship between learning nodes and affinity insight among the learning nodes;
receive, through a reporting module, input data from a user;
generate, by the reporting module, a set of recommended questions based on the affinity question graph and the input data; and
send comprehensive learning data across multiple learning management systems.
Description:
FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
SYSTEM AND METHOD FOR CONTENT AFFINITY ANALYTICS

Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the embodiments and the manner in which they are to be performed.
TECHNICAL FIELD
[001] The embodiments herein relate to a system and method for performing content affinity analytics in a plurality of data management systems, such as assessment management systems (AMS), content management systems (CMS) and learning management systems (LMS). These embodiments additionally and particularly relate to a system and method for complex analytics of substantial volumes of data in order to make the analytics results available to a network of data management systems.

BACKGROUND
[002] Although content of data is pervasive in different types of AMS/CMS/LMS, a convenient method of extracting contextual insights about either the reusability or the cohesiveness of content is not available in an efficient form in existing LMS/CMS/AMS. Further, a method or technique to predict the degree of reuse or togetherness of content in future usage has not been developed sufficiently to meet existing demands in this stream. Currently, systems exist where content categorization and searching are performed in a streamlined fashion; however, reusability studies and predictive studies have not been carried out to a reasonable extent on these systems. Furthermore, conventional techniques are not developed to an extent where such reuse or togetherness data can be exchanged between two or more AMS/CMS/LMSs.
[003] An existing technique describes a method for tracing the variation of learners’ concept knowledge over time and evaluating the content organization of the learning resources used by the learners. The method jointly traces latent learner concept knowledge and simultaneously estimates the quality and content organization of the corresponding learning resources (such as textbook sections or lecture videos) and the questions in assessment sets. The existing technique further discloses block multi-convex optimization-based algorithms that estimate all the learner concept knowledge state transition parameters, the learning resource and question–concept associations, and their respective intrinsic difficulties. This technique mainly focuses on course content evolution over a period of time after analyzing learners’ outcomes. Furthermore, the technique primarily focuses on the usage history of content patterns based upon their reuse by instructors, not directly depending upon learners’ outcomes. The togetherness of content has not been the prime focus of association; rather, learners’ outcomes are used mainly to determine the association.
[004] Another existing technique discloses a method for analyzing, querying, and mining graph databases using sub graph and similarity querying. The technique describes a tree-based index called Closure-tree, or C-tree. Each node in the tree contains discriminative information about its respective descendants in order to facilitate effective pruning. This summary information is represented as a graph closure, a “bounding box” of the structural information of the constituent graphs. The C-tree supports both subgraph queries and similarity queries on various kinds of graphs. The graph closures capture the entire structure of constituent graphs, which implies high pruning rates. This technique therefore uses a graph database as a graph database provides a reasonable performance for such type of analytics. This technique does not focus on a new graph optimizing technique.
[005] Another existing technique provides a relationship graph of nodes representing entities and edges representing relationships among the entities, which is then searched to find a match on search criteria, such as an individual or a company. This technique uses a graph database as a graph database is expected to provide reasonable performance for this type of analytics. However, this technique lacks focus on developing a traversal technique.
[006] Yet another existing technique provides an e-learning system and methodology that structures content so that the content is reusable and flexible. For example, the content structure allows the creator of a course to reuse existing content to create new or additional courses. In addition, the content structure provides flexible content delivery that may be adapted to the learning styles of different learners. This technique focuses on finding the association between a learning objective and its associated content; in contrast, the present approach is primarily focused on finding the association between the questions and on how these questions traverse in clusters over time when instructors teach the same learning objective.
SUMMARY
[007] The following presents a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the embodiments. This summary’s sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.
[008] In view of the foregoing, an aspect herein provides a hardware processor implemented method for performing analytics on learning content, the method comprising: populating, by a data capturing module, learning data into a source table in a relational database used as an intermediate data store; filtering, by an incremental data filtering module initiated by triggers in the source table, the learning data into a destination table that stores only incremental learning data, within the same relational database; capturing, by a changed data capturing module, the incremental learning data from said incremental data store and pushing the incremental learning data into a data queue; transforming, by a graph transformation module, the incremental learning data from said data queue to a graph form; storing, by a graph data access module, the incremental learning data in graph form into a graph database, wherein the graph database comprises the learning data as a plurality of incremental learning data, and wherein the learning data comprises a parent-child hierarchy of learning nodes and edges between the learning nodes; analyzing, by an affinity analytics module, the incremental learning data in the graph database to establish relationships represented by said edges between the learning nodes, wherein the relationship is based on the parent-child hierarchy of the learning nodes; creating, by the affinity analytics module, attributes of the edges of relationships by establishing facts about the relationship between learning nodes and affinity insight among the learning nodes; receiving, through a reporting module, input data from a user; generating, by the reporting module, a set of recommended questions based on the affinity question graph and the input data; and finally sending comprehensive learning data across multiple learning management systems.
[009] In another aspect, there is provided a system for performing analytics on learning content, the system comprising: a memory storing instructions; and a processor communicatively coupled to said memory, wherein said processor is configured by said instructions to: populate, by a data capturing module, learning data into a source table in a relational database used as an intermediate data store; filter, by an incremental data filtering module initiated by triggers in the source table, the incremental learning data coming into the source table into a destination table which stores only incremental learning data within the same relational database; capture, by a changed data capturing module, the incremental learning data from said incremental data store and push the incremental learning data into a data queue; transform, by a graph transformation module, the incremental learning data from said data queue to a graph form; store, by a graph data access module, the incremental learning data in graph form into a graph database, wherein the graph database comprises the learning data as a plurality of incremental learning data, and wherein the learning data comprises a parent-child hierarchy of learning nodes and edges between the learning nodes; analyze, by an affinity analytics module, the incremental learning data in the graph database to establish relationships represented by said edges between the learning nodes, wherein the relationship is based on the parent-child hierarchy of the learning nodes; create, by the affinity analytics module, attributes of the edges of relationships by establishing facts about the relationship between learning nodes and affinity insight among the learning nodes; receive, through a reporting module, input data from a user; generate, by the reporting module, a set of recommended questions based on the affinity question graph and the input data; and finally send comprehensive learning data across multiple learning management systems.

BRIEF DESCRIPTION OF THE DRAWINGS
[010] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[011] FIG 1 illustrates a block diagram of a content affinity system for performing content affinity analytics, in accordance with an example embodiment of the present disclosure;
[012] FIG 2 illustrates a representative system diagram of the content affinity analytics system, in accordance with an example embodiment of the present disclosure;
[013] FIG 3 illustrates a representative system diagram describing the various functionalities for capturing learning data in accordance with an example embodiment of the present disclosure;
[014] FIG 4 illustrates a representative system diagram for performing affinity analytics on the learning data stored in graph form in the graph database, in accordance with an example embodiment of the present disclosure;
[015] FIG 5 illustrates a representative system diagram of a data interface for the purpose of accepting inputs from an invoker and reporting the recommendations of the affinity analytics, in accordance with an example embodiment of the present disclosure;
[016] FIG 6 illustrates a representative logical diagram for the organization of data in the graph database in the form of nodes and learning objectives and the corresponding relationships, in accordance with an example embodiment of the present disclosure;
[017] FIG 7 illustrates a representative logical diagram for the affinity analytics on the nodes of the graph representing learning components comprising of books, assignments, questions and learning objectives, in accordance with an example embodiment of the present disclosure;
[018] FIG 8 illustrates a representative block diagram of the entities relating to each of said learning components and their mutual relationships and cardinalities, in accordance with an example embodiment of the present disclosure;
[019] FIG 9 illustrates a representative block diagram of a learning data exchange platform for data exchange across a plurality of learning and assessment systems, in accordance with an example embodiment of the present disclosure; and
[020] FIG 10 illustrates a representative block diagram of an embodiment of learning data exchange platform for data exchange across various learning and assessment systems, in accordance with an example embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
[021] The embodiments herein and the various features and details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[022] The words “comprising,” “having,” “containing,” and “including,” and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
[023] It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.
[024] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.
[025] Before setting forth the detailed explanation, it is noted that all of the discussion below, regardless of the particular implementation being described, is exemplary in nature, rather than limiting. For example, although selected aspects, features, or components of the implementations are depicted as being stored in memories, all or part of the systems and methods consistent with the Content Affinity Analytics system and method may be stored on, distributed across, or read from other machine-readable media.
[026] FIG. 1 is a block diagram of a Content Affinity Analytics system 100 according to an embodiment of the present disclosure. The Content Affinity Analytics system 100 comprises a memory 102, a hardware processor 104, and an input/output (I/O) interface 106. The memory 102 further includes one or more modules 108. The memory 102, the hardware processor 104, the input/output (I/O) interface 106, and/or the modules 108 may be coupled by a system bus or a similar mechanism.
[027] The memory 102 stores instructions, pieces of information, and data used by a computer system, for example the Content Affinity Analytics system 100, to implement the functions (or embodiments) of the present disclosure. The memory 102 may include, for example, volatile memory and/or non-volatile memory. Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory and static random access memory. Examples of non-volatile memory include, but are not limited to, electrically erasable programmable read only memory (EEPROM), flash memory, hard disks, magnetic tapes, optical disks, programmable read only memory and erasable programmable read only memory. The memory 102 may be configured to store information, data, applications, instructions or the like for enabling the Content Affinity Analytics system 100 to carry out various functions in accordance with various example embodiments.
[028] The hardware processor 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Further, the hardware processor 104 may comprise a multi-core architecture. Among other capabilities, the hardware processor 104 is configured to fetch and execute computer-readable instructions or modules stored in the memory 102.
[029] Fig 2 depicts the overall schematic view of the system 100, wherein the AMS/LMS component 201 is responsible for execution of the actual use cases of the AMS/LMS. The AMS/LMS component 201 assists educators to choose a book, create questions, group them into an assignment and then distribute it to the students. The data for the AMS/LMS module is stored in a traditional Entity-Relationship (ER) database 306, depicted here as the SQL DB. The Data Capture Services Module 203 collects AMS/LMS data in real time, transforms it into a graph model and stores it in a graph database 322, depicted here as the Graph DB. The Affinity Analytics Module 205 is a daily batch process that analyses the data in the Graph DB 322, executes the affinity algorithm that generates affinity data based on the contents of the Graph DB 322, and stores the additional affinity data within the nodes and edges of the Graph DB 322. This arrangement eliminates the need to execute the affinity algorithm during any online query, as the data volume is likely to be substantial, making a quick query turnaround unlikely. The Reporting Module 204 receives input data from an invoker in the form of questions, executes graph queries on the Graph DB 322 based on the input, and sends the data back to the invoker. In one embodiment the data may be sent as a JSON (JavaScript Object Notation, a format that uses human-readable text to transmit data objects) response to the invoker, and REST (Representational State Transfer, an architectural style for networked services) interfaces may be used for this data input and output.
[030] Now referring to Fig 3, the Data Capture Service Module as depicted represents a method wherein real-time data is collected from the SQL DB 306, transformed into a graph Entity-Relationship model and persisted in the Graph DB 322. In one embodiment, use of techniques like Oracle Advanced Queuing can guarantee that the insertion order is preserved and that retrieval is done in sequential order. In this embodiment, Neo4j is employed as the graph database, but technically any graph database may be implemented. Building the affinity data using a Graph DB is considerably more efficient than using a relational, No-SQL tabular or document-oriented structure: retrieving a node's neighbours from the Graph DB 322 is close to a constant-time operation, and the affinity data can be rebuilt quickly and far more efficiently than with relational databases.
Steps are described here:
1. The data interface 304 to the SQL DB 306 comprises the Learning Events Capturing Module 300 and the Changed Data Capturing Module 302.
2. Instructors create / update assessment items, and these assessment items are captured by the Learning Events Capturing Module 300. These creations / updates result in learning events being generated, which are persisted in the source table 308.
3. Triggers 310 are activated, which initiate a filtering module 311 that captures the incremental data; the incremental data is then stored in the destination table 312.
4. The Changed Data Capturing Module 302 extracts data from the destination table 312 and puts it in a queue 314. This queue arrangement decouples the ETL (Extract Transform Load) Module 315 from the original system comprising the SQL DB 306 and the Data Interface Module 304.
5. The Graph Transformation Module 318 reads data from the queue whenever data is available and transforms it into a graph data structure (one that can be stored in a graph database).
6. Graph Data Access Module 320 then reads the graph data structure and stores the graph data into the Graph DB 322.
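The steps above can be sketched as a minimal in-memory pipeline. This is an illustrative sketch only, not the patented implementation: the row shape, node/edge dictionaries and identifiers below are assumptions, and the queue stands in for a production queue such as Oracle Advanced Queuing.

```python
import queue

# Hypothetical shape of changed-data rows pulled from the destination table.
changed_rows = [
    {"type": "Assignment", "id": "a10", "contains": ["q20"]},
    {"type": "Assignment", "id": "a11", "contains": ["q20", "q21"]},
]

# Step 4: the Changed Data Capturing Module pushes rows into a queue,
# decoupling the ETL module from the source SQL database.
data_queue = queue.Queue()
for row in changed_rows:
    data_queue.put(row)

# Step 5: the Graph Transformation Module turns each row into a node
# plus its outgoing "contains" edges.
def to_graph(row):
    node = {"label": row["type"], "id": row["id"]}
    edges = [{"from": row["id"], "rel": "contains", "to": q}
             for q in row.get("contains", [])]
    return node, edges

# Step 6: the Graph Data Access Module persists the structures
# (stubbed here as plain dictionaries instead of a Graph DB).
graph = {"nodes": [], "edges": []}
while not data_queue.empty():
    node, edges = to_graph(data_queue.get())
    graph["nodes"].append(node)
    graph["edges"].extend(edges)

print(len(graph["nodes"]), len(graph["edges"]))  # 2 3
```

Because insertion order is preserved by the queue, the graph is built in the same order in which the learning events occurred.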
[031] The Affinity Analytics Module 404, as depicted in Fig 4, executes the affinity logic on the Graph DB 322 and populates new relationships / attributes into the same Graph DB. To generate effective learning content, historical data about the usage of the content is utilized. An aggregation of such learning content is considered a learning plan. The usage history of the learning content, as well as the fitment history of the learning content within the learning plan, is used to generate affinity insights. In this invention, a graph model has been used to model the entities, and an edge between two nodes represents a relationship between those two entities. Relationship details are maintained as attributes of the edge. The Affinity Analytics Module 404 performs the following steps:
1. Association discovery: Finds learning components that imply the presence of other content in the same category / event. A periodic batch runs on the graph database 322, which initially extracts insights about content association such as content reuse and content cohesion 401.
2. Pattern discovery: Builds affinity relationships between learning components, generating each affinity relationship and computing the affinity weight of each newly created relationship 402.
3. Time sequence discovery: Finds patterns between events such that the presence of one set of items is followed by another set of items in a series of events over a period of time. The time sequence discovery keeps generating recommendations based upon content selection and the content affinity graph, eventually resulting in a recommendation that determines a question cluster 403.
[032] The Reporting Module as depicted in Fig 5 is responsible for receiving input data from a human invoker through a web interface as questions, executing graph queries on the underlying graph database Graph DB 322 based on the input data, and for sending the resultant data as an affined group of questions back to the invoker again through the web interface.
[033] In one embodiment the data may be sent as a JSON response to the invoker and REST API (Application Program Interface) service interfaces may be used for this data interface and transfer.
Steps for the above mentioned embodiments are described here:
1. REST API Layer 501: REST API presents the input data in JSON format, which flows into the Graph to JSON Transformation Layer 502. This layer also acts as the output interface layer for data coming in JSON format from the Graph to JSON Transformation Layer.
2. Graph to JSON Transformation Layer 502: Transformation takes place to convert JSON to Graph format and Graph to JSON format for interfacing with the Graph Data Access Layer.
3. Graph Data Access Layer 503: This layer interacts with the Graph DB 322 and fetches data for the Graph to JSON Transformation Layer to transform and pass the data on to the REST API Layer.
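The three layers can be sketched as plain functions. This is a hedged illustration only: the function names, the stubbed affinity data and the direct function call in place of an HTTP endpoint are assumptions, not the patented interfaces.

```python
import json

# Graph Data Access Layer (503), stubbed: in practice this would run a
# graph query against the Graph DB 322.
def graph_data_access(question_id):
    affined = {"q20": [("q21", 1)]}  # illustrative affinity data
    return affined.get(question_id, [])

# Graph to JSON Transformation Layer (502): converts the graph result
# into a JSON document for the REST API Layer.
def graph_to_json(question_id):
    rows = graph_data_access(question_id)
    return json.dumps({"question": question_id,
                       "recommendations": [{"question": q, "weight": w}
                                           for q, w in rows]})

# REST API Layer (501): would expose graph_to_json over HTTP;
# here it is invoked directly for illustration.
response = graph_to_json("q20")
print(response)  # {"question": "q20", "recommendations": [{"question": "q21", "weight": 1}]}
```

The layering keeps the graph representation confined to the access layer, so the invoker only ever sees JSON.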
[034] Fig 6 depicts a graphical model of the relationships that exist among the AMS entities. A Book 601 is a node, has attributes such as [Author, ISBN (International Standard Book Number), Edition, Publication] and contains (or is associated with) a number of Assignments 602. An Assignment 602 is an aggregation of questions 603 catering to one learning objective 604. An Assignment node 602 has attributes such as [Id, Title, Description, Start Date, End Date, Author_ID]. A question 603 is situated at the atomic level and is tagged to a Learning Objective 604; the question 603 checks students’ understanding of one particular learning objective 604 or of multiple learning objectives. The attributes of a question 603 are [Question Title, Question Weight, Question Source System, Question_ID]. A question 603 can be tagged to multiple learning objectives 604.
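The entity model above can be sketched as plain records. The attribute names follow the specification; all attribute values other than the identifiers taken from the representative data set (the book ISBN, a11, q21, l31, l33) are hypothetical placeholders.

```python
# Node records carrying the attribute sets named in the specification.
# Attribute values are hypothetical placeholders for illustration.
book = {"Author": "J. Doe", "ISBN": "0072363711",
        "Edition": "4th", "Publication": "Example Press"}
assignment = {"Id": "a11", "Title": "Week 1 quiz",
              "Description": "Covers objectives l31 and l33",
              "Start_Date": "2016-01-04", "End_Date": "2016-01-11",
              "Author_ID": "u7"}
question = {"Question_Title": "Sample question", "Question_Weight": 1,
            "Question_Source_System": "AMS-1", "Question_ID": "q21"}

# Edges mirror Fig 6: a Book has Assignments, an Assignment contains
# Questions, and a Learning Objective serves Questions. Question q21
# illustrates tagging to two learning objectives, l31 and l33.
edges = [
    ("Book:0072363711", "has", "Assignment:a11"),
    ("Assignment:a11", "contains", "Question:q21"),
    ("LearningObjective:l31", "serves", "Question:q21"),
    ("LearningObjective:l33", "serves", "Question:q21"),
]
print(len(edges))  # 4
```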
[035] Fig 7 depicts a representative logical diagram for the affinity analytics on the nodes of the graph representing learning components comprising books, assignments, questions and learning objectives. In this embodiment, a representative set of relationships in the graph model is depicted below, taking a Book [0072363711] 601 as the example.
Node [ Attribute] Relationship Node
Book [0072363711] :[has] {[a10],[a11],[a12],[a13],[a14]}
Assignment [a10] :[contains] {[q20]}
Assignment [a11] :[contains] {[q20],[q21]}
Assignment [a12] :[contains] {[q21],[q22],[q23]}
Assignment [a13] :[contains] {[q23],[q24]}
Assignment [a14] :[contains] {[q22],[q23],[q24]}
Learning Objective [l31] :[serves] {[q20],[q21],[q23]}
Learning Objective [l32] :[serves] {[q22]}
Learning Objective [l33] :[serves] {[q21],[q23],[q24]}

The representation below is an embodiment of a query in a graph query language (rendered here in Cypher-style syntax) performing the logic of determining question associations. For each question (q1) 603, the query determines what other questions (q2) are associated through the assignments 602 that contain that particular question (q1) 603:
MATCH (q1:Question)<-[:contains]-(:Assignment)-[:contains]->(q2:Question)
WHERE q1 <> q2
RETURN q2;
The entire graph has to be traversed for each question node to generate such insights, and a new relationship gets created between each such pair of questions. Fig 7 shows this by example.
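The logic of the association query above can be checked in plain Python over the representative data, under the assumption that the assignment-to-question containment is available as an in-memory mapping (a sketch, not the patented implementation):

```python
# Representative Assignment -> Question containment from the specification.
assignments = {
    "a10": {"q20"},
    "a11": {"q20", "q21"},
    "a12": {"q21", "q22", "q23"},
    "a13": {"q23", "q24"},
    "a14": {"q22", "q23", "q24"},
}

def associated_questions(q):
    """Questions sharing at least one containing assignment with q,
    i.e. the result set of the graph query for q1 = q."""
    out = set()
    for contained in assignments.values():
        if q in contained:
            out |= contained - {q}
    return out

print(sorted(associated_questions("q21")))  # ['q20', 'q22', 'q23']
```

For q21, the result matches the assignments a11 and a12 in which it appears.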
[036] Once the above-mentioned relationship is created between each pair of questions, the query embodiment above provides the affinity of the questions 603. If a pair of questions 603 appears together in more than one Assignment 602, the affinity weight is increased by 1 for each additional co-occurrence. In this manner, once the entire graph has been traversed and the relationships updated, the graph represents a connected “Affine graph” which holds the data for recommendation, aggregation and other types of insight. The table below shows the “Affine graph” for the above representative data:
Question Containing Assignments Affined Questions Weight
[q20] {[a10],[a11]} [q21] 1
[q21] {[a11],[a12]} [q20],[q22],[q23] 1,1,1
[q22] {[a12],[a14]} [q21],[q23],[q24] 1,2,1
[q23] {[a12],[a13],[a14]} [q21],[q22],[q24] 1,2,2
[q24] {[a13],[a14]} [q22],[q23] 1,2
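The affinity weights above are pairwise co-occurrence counts over shared Assignments. A minimal sketch, assuming the same dictionary model as before:

```python
from collections import Counter
from itertools import combinations

# Assignment-to-questions mapping from the representative set above.
assignment_contains = {
    "a10": ["q20"],
    "a11": ["q20", "q21"],
    "a12": ["q21", "q22", "q23"],
    "a13": ["q23", "q24"],
    "a14": ["q22", "q23", "q24"],
}

# Affinity weight of a question pair = number of assignments containing both.
affinity = Counter()
for questions in assignment_contains.values():
    for q1, q2 in combinations(sorted(questions), 2):
        affinity[(q1, q2)] += 1
```

For instance, `affinity[("q23", "q24")]` comes out as 2, since [q23] and [q24] share Assignments [a13] and [a14].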

[037] If the learning nodes were to be maintained in a traditional Entity Relationship model, the nodes would be as depicted in Fig 8. Here the relationship between Assignments 804 and Questions 802 would be many-to-many, as an Assignment can contain multiple Questions and a Question can be associated with multiple Assignments. This would then be implemented by an intermediate table 801, illustrated here by the name “Assignment_Question”. This model has been used by traditional AMS/LMS systems.
Now in order to run such Affinity Analytics on this ER model, using the earlier embodiment, a query along the following lines (a self-join on the intermediate table; column names are illustrative) needs to be executed:

SELECT DISTINCT aq2.Question_ID
FROM Assignment_Question aq1
JOIN Assignment_Question aq2 ON aq1.Assignment_ID = aq2.Assignment_ID
WHERE aq2.Question_ID != aq1.Question_ID;
Typically, in any AMS/LMS production system, the run-time complexity of the above query would be huge, and real-time analytics could not be run on it. Hence, in order to perform the Affinity Analytics on the above data in an effective and efficient way, the strategy of converting the data from the traditional Entity-Relationship format into a Graph DB, as depicted in Fig 2, was adopted.
The statistics below give the performance benefits of a Graph DB over the traditional ER model:
===============================================================
• Query response time = Function of (graph density, graph size, query degree)
• Relational Database (RDBMS): Exponential slowdown as each factor increases
• Graph DB (like Neo4j):
a) performance remains constant as graph size increases,
b) performance slowdown is linear or better as density & degree increase.
• Here is an illustration of the performance of a Graph DB such as Neo4j with respect to an RDBMS such as MySQL
o Sample social graph with around 1000 persons
o Average 50 friends per person
o Path coverage limited to depth 4
Database # of persons Query time
MySQL 1000 2000 ms
Neo4j 1000 2 ms
Neo4j 1,000,000 2 ms
===============================================================
A Graph Database is fundamentally very fast at executing an “Arbitrary Path Query”. For example, with reference to Fig 7 and the nodes depicted within, consider the problem of finding out which Questions 603 are associated with one Question [q20] for the record set below.
Question Affined Questions
[q20] [q21]
[q21] [q20],[q22],[q23]
[q22] [q21],[q23],[q24]
[q23] [q21],[q22],[q24]
[q24] [q22],[q23]
For an RDBMS to scan this, the above data set needs to be represented as below:
Question Affined Questions
[q20] [q21]
[q21] [q20]
[q21] [q22]
[q21] [q23]
[q22] [q21]
[q22] [q23]
[q22] [q24]
[q23] [q21]
[q23] [q22]
[q23] [q24]
[q24] [q22]
[q24] [q23]
Now the RDBMS has to scan the full table multiple times, once for each question, to check whether the data has any affined question for [q20]. So as the number of records grows, the scanning time also increases, resulting in exponential growth of the response time.
On the other hand, the graph traversal always knows beforehand which walks start from or include node [q20], and hence it does not have to search the entire graph but simply traverses those walks. Hence, even if the number of peripheral nodes in the graph increases, the performance remains almost linear. For solving this category of problems, a Graph Database will always be helpful.
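The contrast can be illustrated with a toy sketch, where a full table scan and a prebuilt adjacency index stand in for the RDBMS and Graph DB behaviours described above (an illustration only, not a benchmark):

```python
# Flattened pair table of affined questions, as an RDBMS would store it.
pair_table = [
    ("q20", "q21"), ("q21", "q20"), ("q21", "q22"), ("q21", "q23"),
    ("q22", "q21"), ("q22", "q23"), ("q22", "q24"), ("q23", "q21"),
    ("q23", "q22"), ("q23", "q24"), ("q24", "q22"), ("q24", "q23"),
]

def affined_by_scan(q):
    """RDBMS-style: every lookup walks the whole table."""
    return {b for a, b in pair_table if a == q}

# Graph-style adjacency index: each node already "knows" its walks,
# so a lookup touches only that node's own neighbours.
adjacency = {}
for a, b in pair_table:
    adjacency.setdefault(a, set()).add(b)

def affined_by_graph(q):
    return adjacency.get(q, set())
```

Both functions return the same answer for [q20], but the scan costs a pass over every row per query, while the adjacency lookup is independent of the total table size.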
[038] In one embodiment of the exchange of data across multiple AMS/LMS systems, an extension of the Caliper framework may be used to exchange Affinity Analytics data. Currently the Caliper framework guides the exchange of learning analytics data only. However, as Affinity Analytics data is a new class of information, there is a need to exchange this information as well within the Caliper framework.
[039] Online learning environments and other learner activity data have traditionally been maintained in silos, making it impossible for learners and educators alike to view the wider picture and to get a holistic view of what is happening in the learning and teaching environment. This gave rise to the need to standardize and consolidate data for a single view and for cross-provider analytics.
[040] In one embodiment addressing the above need for standardization, the Learning Tools Interoperability (LTI) and Question and Test Interoperability (QTI) specifications enable the connection of learning systems with external service tools in a standard framework and support the exchange of item, test and result data between authoring and delivery systems in a network of learning management systems.
[041] In the same embodiment as above, the Caliper Framework helps establish a means to consistently capture and present measures of learning activity, defines a common language for labelling learning data and provides a standard way of measuring learning activities and effectiveness.
[042] In one embodiment, considering the above-mentioned LTI, QTI and Caliper Framework, a standard QTI Package Schema is depicted in Fig 9.
In the proposed embodiment the QTI schema is as depicted in Fig 10, which illustrates how the Affinity Data would be housed inside the schema.
[043] In this proposed embodiment, a collection of affined assessment items, embodied here as the “assessmentItem” collection 908, will be tagged to each of the assessment item references “assessmentItemRef” 907. The AffinedassignmentItem collection 1001 would hold a pointer back to the original assessmentItem 908 so that data does not have to be duplicated. The “weight” and any additional attributes can be accommodated in the AffinedassignmentItem 1001 data structure. This would then enable an existing Caliper framework to transport the affinity analytics data within the same QTI package.
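The intended packaging can be sketched as an XML fragment built with the standard library. The element and attribute names here are illustrative assumptions; the actual schema extension is as depicted in Fig 10:

```python
import xml.etree.ElementTree as ET

# Illustrative only: an assessmentItemRef carrying affined items that
# point back to the original assessmentItem identifiers, with a weight
# attribute, so the affinity data travels inside the same QTI package.
item_ref = ET.Element("assessmentItemRef", identifier="q20")
affined = ET.SubElement(item_ref, "affinedAssessmentItemCollection")
for target, weight in [("q21", "1")]:
    ET.SubElement(affined, "affinedAssessmentItem",
                  ref=target, weight=weight)

fragment = ET.tostring(item_ref, encoding="unicode")
```

The back-pointing `ref` attribute mirrors the intent described above: the affined entry references the original assessmentItem rather than duplicating it.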

Documents

Application Documents

# Name Date
1 201621034707-IntimationOfGrant24-04-2024.pdf 2024-04-24
2 201621034707-PatentCertificate24-04-2024.pdf 2024-04-24
3 201621034707-PETITION UNDER RULE 137 [19-04-2024(online)].pdf 2024-04-19
4 201621034707-RELEVANT DOCUMENTS [19-04-2024(online)].pdf 2024-04-19
5 201621034707-Written submissions and relevant documents [23-02-2024(online)].pdf 2024-02-23
6 201621034707-Correspondence to notify the Controller [07-02-2024(online)].pdf 2024-02-07
7 201621034707-FORM-26 [07-02-2024(online)]-1.pdf 2024-02-07
8 201621034707-FORM-26 [07-02-2024(online)].pdf 2024-02-07
9 201621034707-US(14)-HearingNotice-(HearingDate-09-02-2024).pdf 2024-01-23
10 201621034707-CLAIMS [26-11-2020(online)].pdf 2020-11-26
11 201621034707-COMPLETE SPECIFICATION [26-11-2020(online)].pdf 2020-11-26
12 201621034707-FER_SER_REPLY [26-11-2020(online)].pdf 2020-11-26
13 201621034707-OTHERS [26-11-2020(online)].pdf 2020-11-26
14 201621034707-FER.pdf 2020-05-26
15 201621034707-CORRESPONDENCE(IPO)-(CERTIFIED)-(14-2-2017).pdf 2018-08-11
16 201621034707-Correspondence-071116.pdf 2018-08-11
17 201621034707-Correspondence-231116.pdf 2018-08-11
18 201621034707-Form 1-071116.pdf 2018-08-11
19 201621034707-Power of Attorney-231116.pdf 2018-08-11
20 abstract1.jpg 2018-08-11
21 Request For Certified Copy-Online.pdf 2018-08-11
22 Form 3 [08-03-2017(online)].pdf 2017-03-08
23 REQUEST FOR CERTIFIED COPY [06-02-2017(online)].pdf 2017-02-06
24 Form 26 [18-11-2016(online)].pdf 2016-11-18
25 Other Patent Document [02-11-2016(online)].pdf 2016-11-02
26 Description(Complete) [10-10-2016(online)].pdf 2016-10-10
27 Drawing [10-10-2016(online)].pdf 2016-10-10
28 Form 18 [10-10-2016(online)].pdf 2016-10-10
29 Form 18 [10-10-2016(online)].pdf_53.pdf 2016-10-10
30 Form 20 [10-10-2016(online)].jpg 2016-10-10
31 Form 3 [10-10-2016(online)].pdf 2016-10-10

Search Strategy

1 searchE_15-05-2020.pdf

ERegister / Renewals

3rd: 23 Jul 2024

From 10/10/2018 - To 10/10/2019

4th: 23 Jul 2024

From 10/10/2019 - To 10/10/2020

5th: 23 Jul 2024

From 10/10/2020 - To 10/10/2021

6th: 23 Jul 2024

From 10/10/2021 - To 10/10/2022

7th: 23 Jul 2024

From 10/10/2022 - To 10/10/2023

8th: 23 Jul 2024

From 10/10/2023 - To 10/10/2024

9th: 10 Oct 2024

From 10/10/2024 - To 10/10/2025

10th: 01 Oct 2025

From 10/10/2025 - To 10/10/2026