Abstract: Training for Artificial Intelligence (AI) and Machine Learning (ML) skills is fundamentally different from training for programming tasks. State of the art systems that evaluate a programmer's skills and capabilities face challenges in handling ML and AI capability evaluation at scale. Embodiments herein provide a method and system for training and evaluation of Artificial Intelligence (AI) capabilities. The system provides a scalable approach to train and evaluate ML and AI capabilities by offering an interactive and hands-on read–eval–print loop (REPL) based learning and assessment environment for assessing multiple simultaneous learners, offering several powerful capabilities for instructors, with online and offline modes for trainees adding the scalability to handle a large number of trainees. [To be published with 3A]
Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR TRAINING AND EVALUATION OF ARTIFICIAL INTELLIGENCE (AI) AND MACHINE LEARNING (ML) CAPABILITIES
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The embodiments herein generally relate to the field of Artificial Intelligence (AI) and, more particularly, to a method and system for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees.
BACKGROUND
[002] Artificial Intelligence (AI) is a key aspect in organizations of the day, contributing to enhancing products as well as deciding business strategies. To keep pace with newer breeds of AI technologies for enabling growth and transformation for its customers, an organization needs to constantly re-skill its employees with AI skills and capabilities. In the remote world, upskilling novice developers and cross-skilling experienced associates in a way that supports interactive, step-by-step, and self-paced learning, in the absence of the informal, peer-to-peer learning that happens across the cubicles, is a big challenge. Identifying and continuously engaging talent with several hands-on Machine Learning (ML) or AI challenges and keeping pace with the changing field of AI is a challenge. Furthermore, addressing these requirements of an organization at scale is an added challenge.
[003] There are a number of programming assessment platforms available in the market. Most of the existing tools are meant for coding skills and have assessments that are test case based. However, ML/AI requires different types of assessments to be supported, not necessarily based on test cases, which are the norm, but based on datasets. ML/AI assessments are also different from usual programming assessments because of the need for different environments and infrastructure provisioning based on the ML/AI task at hand. For example, a task may require supporting open source libraries such as TensorFlow™, SCIKIT-LEARN™, or NLTK™, or may require different memory and compute resources depending on the task and dataset size.
[004] Thus, systems for evaluating the technical skills and capabilities of users in AI/ML at scale require a different technical approach as against normal programming skills testing approaches, since machine learning and AI encompass not just programming but other skills too. When data science or machine learning tasks are to be dealt with, the evaluating system is required to have flexibility and scale in evaluating users based on machine learning metrics and data types, rather than just comparing primitive values for equality.
SUMMARY
[005] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[006] For example, in one embodiment, a method for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees is provided. The method includes providing an authenticated access to an instructor, wherein the instructor defines an AI or ML training and assessment approach for a plurality of trainees. Further, the method includes enabling the instructor to uniquely identify a type of a task to be assigned to the plurality of trainees, wherein the type of task comprises one of a dataset task, a programming task, and a combinational task thereof. Further, the method includes identifying, in accordance with the type of the task, a dataset for each trainee, from among a plurality of datasets from a centrally managed repository. A dataset distribution approach is managed centrally and enables one of: (a) each trainee receiving a unique dataset from among the plurality of datasets, (b) each group of trainees from among a plurality of groups generated from the plurality of trainees receiving a unique subset of dataset among the plurality of datasets, and (c) each of the plurality of trainees receiving the same dataset. Furthermore, the method includes defining a plurality of presets associated with the dataset using Automated Machine Learning (AutoML) techniques, the plurality of presets comprising: (a) splitting of the dataset into a training set shared with each trainee, a test set shared with each trainee, and a test solution set shared with the instructor comprising actual values for outputs in target columns, against which output predictions of each trainee are evaluated for a ML problem to be performed for the type of the task identified for each trainee; (b) defining independent and dependent features of the dataset to be used by the trainee for the ML problem; (c) determining evaluation metrics to be applied for output predictions of the ML problem executed by each trainee; and (d) defining a scoring pattern for the ML problem against the evaluation metrics identified. Furthermore, the method includes generating an instructor-side evaluation notebook in accordance with a task design planned by the instructor for the task for each trainee, wherein the task design splits the task into a plurality of subtasks and specifies a plurality of values corresponding to the defined plurality of presets. Further, the method includes generating an evaluation trainee notebook for each trainee, defining a framework for executing the plurality of subtasks of the task specified by the task design. Further, the method includes distributing, to each trainee, the training set and the test set obtained from the dataset in accordance with the plurality of presets. Furthermore, the method includes publishing the task in accordance with the type identified for each trainee by mapping the task or corresponding plurality of subtasks to one or more nodes in a concept inventory comprising a hierarchical node structure representing ML concepts, wherein inclination and learning of each trainee is analyzed based on the task or the plurality of subtasks completed by each trainee and corresponding mapping to the ML concepts. Furthermore, the method includes dynamically provisioning a plurality of parameters based on an execution environment of each trainee, wherein the execution environment comprises an offline mode and an online mode to be opted by each trainee.
[007] In another aspect, a system for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees is provided. The system comprises a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to provide, an authenticated access to an instructor, wherein the instructor defines AI or ML training and assessment approach for a plurality of trainees. Further, the one or more hardware processors enable the instructor to uniquely identify a type of a task to be assigned to the plurality of trainees, wherein the type of task comprises one of a dataset task, a programming task, and a combinational task thereof. Further the one or more hardware processors identify in accordance with the type of the task, a dataset for each trainee, from among a plurality of datasets from a centrally managed repository. A dataset distribution approach is managed centrally and enables one of: (a) each trainee receiving a unique dataset from among the plurality of datasets, (b) each group of trainees from among a plurality of groups generated from the plurality of trainees receiving a unique subset of dataset among the plurality of datasets, and (c) each of the plurality of trainees receiving same dataset. Furthermore, the one or more hardware processors define a plurality of presets associated with the dataset using Automated Machine Learning (AutoML) techniques, the plurality of presets comprising: (a) splitting of the dataset into a training set shared with each trainee, a test set shared with each trainee, and a test solution set shared with the instructor comprising actual values for outputs in target columns, against which output predictions of each trainee are evaluated for a ML problem to be performed for the type of the task identified for each trainee; (b) defining independent and dependent features of the dataset to be used by the trainee for the ML problem; (c) determining evaluation metrics to be applied for output predictions of the ML problem executed by each trainee; and (d) defining a scoring pattern for the ML problem against the evaluation metrics identified. Furthermore, the one or more hardware processors generate, an instructor-side evaluation notebook in accordance with a task design planned by the instructor for the task for each trainee, wherein the task design splits the task into a plurality of subtasks and specifies a plurality of values corresponding to the defined plurality of presets. Further, the one or more hardware processors generate, an evaluation trainee notebook for each trainee, defining a framework for executing the plurality of subtasks of the task specified by the task design. Further, the one or more hardware processors distribute, to each trainee, the training set and the test set obtained from the dataset in accordance with the plurality of presets. Furthermore, the one or more hardware processors publish, the task in accordance with the type identified for each trainee by mapping the task or corresponding plurality of subtasks to one or more nodes in a concept inventory comprising a hierarchical node structure representing ML concepts, wherein inclination and learning of each trainee is analyzed based on the task or the plurality of subtasks completed by each trainee and corresponding mapping to the ML concepts. 
Furthermore, the one or more hardware processors dynamically provision a plurality of parameters based on an execution environment of each trainee, wherein the execution environment comprises an offline mode and an online mode to be opted by each trainee.
[008] In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions, which when executed by one or more hardware processors causes a method for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees. The method includes providing, an authenticated access to an instructor, wherein the instructor defines AI or ML training and assessment approach for a plurality of trainees. Further, the method includes enabling the instructor to uniquely identify a type of a task to be assigned to the plurality of trainees, wherein the type of task comprises one of a dataset task, a programming task, and a combinational task thereof. Further method includes identifying in accordance with the type of the task, a dataset for each trainee, from among a plurality of datasets from a centrally managed repository. A dataset distribution approach is managed centrally and enables one of: (a) each trainee receiving a unique dataset from among the plurality of datasets, (b) each group of trainees from among a plurality of groups generated from the plurality of trainees receiving a unique subset of dataset among the plurality of datasets, and (c) each of the plurality of trainees receiving same dataset. Furthermore, method includes defining a plurality of presets associated with the dataset using Automated Machine Learning (AutoML) techniques, the plurality of presets comprising: (a) splitting of the dataset into a training set shared with each trainee, a test set shared with each trainee, and a test solution set shared with the instructor comprising actual values for outputs in target columns, against which output predictions of each trainee are evaluated for a ML problem to be performed for the type of the task identified for each trainee; (b) defining independent and dependent features of the dataset to be used by the trainee for the ML problem; (c) determining evaluation metrics to be applied for output predictions of the ML problem executed by each trainee; and (d) defining a scoring pattern for the ML problem against the evaluation metrics identified. Furthermore, method includes generating, an instructor-side evaluation notebook in accordance with a task design planned by the instructor for the task for each trainee, wherein the task design splits the task into a plurality of subtasks and specifies a plurality of values corresponding to the defined plurality of presets. Further, method includes generating, an evaluation trainee notebook for each trainee, defining a framework for executing the plurality of subtasks of the task specified by the task design. Further, method includes distributing, to each trainee, the training set and the test set obtained from the dataset in accordance with the plurality of presets. Furthermore, method includes publishing, the task in accordance with the type identified for each trainee by mapping the task or corresponding plurality of subtasks to one or more nodes in a concept inventory comprising a hierarchical node structure representing ML concepts, wherein inclination and learning of each trainee is analyzed based on the task or the plurality of subtasks completed by each trainee and corresponding mapping to the ML concepts. 
Furthermore, the method includes dynamically provisioning a plurality of parameters based on an execution environment of each trainee, wherein the execution environment comprises an offline mode and an online mode to be opted by each trainee.
[009] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[0011] FIG. 1 is a functional block diagram of a system for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees, in accordance with some embodiments of the present disclosure.
[0012] FIGS. 2A through 2B (collectively referred as FIG. 2) is a flow diagram illustrating a method for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0013] FIG. 3A depicts a functional flow of the system 100, in accordance with some embodiments of the present disclosure.
[0014] FIG. 3B illustrates a role-based process flow of system of FIG. 1 for user creation, segregated into various groups comprising admin, instructor, trainee, in accordance with some embodiments of the present disclosure.
[0015] FIG. 4A depicts process steps and FIG. 4B depicts an architectural overview, for an online mode and an offline mode of evaluation that can be opted by the trainees via the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0016] FIGS. 5A and 5B depict dynamic provisioning of parameters based on execution environment comprising the offline mode and the online mode respectively, in accordance with some embodiments of the present disclosure.
[0017] FIG. 6 is a concept inventory used by system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0018] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF EMBODIMENTS
[0019] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[0020] Training for Artificial Intelligence (AI) and Machine Learning (ML) skills is fundamentally different from training for programming tasks. State of the art systems that evaluate a programmer's skills and capabilities face challenges in handling ML and AI capability evaluation at scale. Embodiments herein provide a method and system for training and evaluation of AI and ML capabilities of a trainee or learner. The system provides a scalable approach to train and evaluate ML and AI capabilities by offering an interactive and hands-on read–eval–print loop (REPL) based learning and assessment environment for assessing multiple simultaneous learners, offering several powerful capabilities for instructors, with online and offline modes for trainees adding the scalability to handle a large number of trainees. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[0021] FIG. 1 is a functional block diagram of a system 100 for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees, in accordance with some embodiments of the present disclosure.
[0022] A functional flow and a role-based process flow of the system 100 are further detailed in FIG. 3A and FIG. 3B respectively. In an embodiment, the system 100 of FIG. 1 comprises an application server connected to end users such as trainees via thin clients. The application server comprises one or more hardware processor(s) 104, communication interface device(s), alternatively referred as input/output (I/O) interface(s) 106, and one or more data storage devices or a memory 102 operatively coupled to the processor(s) 104. The memory may include a database 108 and a storage 112, also depicted in the functional flow of the system 100 as in FIG. 3A. The system 100 with one or more hardware processors is configured to execute functions of one or more functional blocks of the system 100 depicted in FIG. 1 and FIG. 3A.
[0023] Referring to the system 100, the components of system 100, in an embodiment, include the processor(s) 104, or one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In an embodiment, the system 100 can be implemented in a variety of computing systems including laptop computers, notebooks, hand-held devices such as mobile phones, workstations, servers, and the like.
[0024] The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface to display the generated target images and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular and the like. In an embodiment, the I/O interface (s) 106 can include one or more ports for connecting to a number of external devices or to another server or devices.
[0025] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0026] In an embodiment, the memory 102 includes a plurality of modules 110 (not shown). The plurality of modules 110 include programs or coded instructions that supplement applications or functions performed by the system 100 for executing different steps involved in the process of training and evaluation of Artificial Intelligence (AI) capabilities of trainees, being performed by the system 100. The plurality of modules 110, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The plurality of modules 110 may also be used as signal processor(s), node machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 110 can be used by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof.
[0027] Further, the memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
[0028] The database 108 stores tasks created by the instructor for the trainees, with task details, task meta information, concepts tagged to the task, and the author of the task. The database also has trainee credential information, intermediate and final trainee scores, information to authenticate a trainee, trainee IDs, and so on. Further, the database (or repository) 108 may include a plurality of abstracted pieces of code for refinement and data that is processed, received, or generated as a result of the execution of the plurality of modules in the module(s) 110.
[0029] Although the database 108 is shown internal to the system 100, it will be noted that, in alternate embodiments, the database 108 can also be implemented external to the system 100 as depicted in FIG. 2A, and communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, new data may be added into the database (not shown in FIG. 1) and/or existing data may be modified and/or non-useful data may be deleted from the database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). In addition to the database 108, the system includes a storage 112 that stores a plurality of datasets, instructor-side and trainee-side evaluation notebooks, output predictions and the like for both the online mode and the offline mode. For example, the storage includes cloud storage, or any storage accessible on the application server. Functions of the components of the system 100 are now explained with reference to FIG. 2A through FIG. 6.
[0030] FIGS. 2A through 2B (collectively referred as FIG. 2) is a flow diagram illustrating a method 200 for training and evaluation of Artificial Intelligence (AI) and machine learning (ML) capabilities of trainees, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0031] In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 200 by the processor(s) or one or more hardware processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1, the steps of the flow diagram as depicted in FIGS. 2A and 2B, and FIGS. 3A through FIG. 6. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[0032] As later explained in conjunction with FIG. 3A and 3B, the system 100 provides three types of roles to the users of the system 100. The roles include an admin, an instructor, and a plurality of trainees to be trained and evaluated for AI skills.
[0033] Referring to the steps of the method 200, at step 202 of the method 200, the one or more hardware processors 104 provide an authenticated access to an instructor. The instructor defines the AI training and assessment approach for a plurality of trainees.
[0034] The training and assessment, also referred to as a learning path, can be described as a way for authors or instructors to sequence and group specific hands-on learning content along with other prerequisite learning into a logical course or specialization so that learners can complete a full course or specialization. Support for several AI-assisted coding bots and AI learning tutors can be enabled via the system 100 to help learners or trainees with code completion, English-to-code, and comment-to-code, during initial stages acting as a learning tutor in remote mode.
[0035] At step 204 of the method 200, the one or more hardware processors 104 enable the instructor to uniquely identify a type of a task to be assigned to each trainee among the plurality of trainees, wherein the type of task comprises one of a dataset task, a programming task, and a combinational task thereof.
[0036] At step 206 of the method 200, the one or more hardware processors 104 identify in accordance with the type of the task, a dataset for each trainee among a plurality of datasets from a centrally managed repository, wherein a dataset distribution approach is managed centrally and enables one of:
1. Each trainee receiving a unique dataset from among the plurality of datasets.
2. Each group of trainees from among a plurality of groups generated from the plurality of trainees receiving a unique subset of dataset among the plurality of datasets.
3. Each of the plurality of trainees receiving same dataset.
[0037] The data distribution is explained in conjunction with step 214.
[0038] At step 208 of the method 200, the one or more hardware processors 104 define a plurality of presets associated with the dataset using Automated Machine Learning (AutoML) techniques known in the art. The plurality of presets comprising:
a) Splitting of the dataset into a training set shared with each trainee, a test set shared with each trainee, and a test solution set shared with the instructor comprising actual values for outputs in target columns, against which output predictions of each trainee are evaluated for a ML problem to be performed for the type of the task identified for each trainee.
b) Defining independent and dependent features of the dataset to be used by the trainee for the ML problem.
c) Determining evaluation metrics to be applied for output predictions of the ML problem executed by each trainee.
d) Defining a scoring pattern for the ML problem against the evaluation metrics identified.
[0039] At step 210 of the method 200, the one or more hardware processors 104 generate an instructor-side evaluation notebook in accordance with a task design planned by the instructor for the task for each trainee, wherein the task design splits the task into a plurality of subtasks and specifies a plurality of values corresponding to the defined plurality of presets. Jupyter notebook™ is one implementation of the instructor-side evaluation notebook, which provides an interactive environment where code can be run in different cells. Jupyter notebook™ offers a learner a lot of flexibility and an interactive way of learning programming and seeing output immediately. Customization, if desired by the instructor, is enabled to be applied over the plurality of values corresponding to the plurality of presets during the task design. For example, customization can be applied for the dataset to be generated synthetically or chosen from an existing repository. Similarly, the feature to be predicted, the train-test split, the way the dataset is to be distributed to trainees, the benchmark metrics to be evaluated for the model built by a trainee or for their predictions, and the output-input pair chosen for evaluation of a given sub-step can be customized.
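By way of a purely illustrative, non-limiting sketch (the helper name make_presets, the dictionary keys, and the use of a pandas DataFrame are assumptions for illustration and not the claimed interface), the plurality of presets captured at task-design time may be represented as follows:

# Illustrative sketch only: capturing the plurality of presets (a)-(d) at task-design time.
import pandas as pd
from sklearn.model_selection import train_test_split

def make_presets(df: pd.DataFrame, target: str, test_size: float = 0.3, metric: str = "rmse"):
    # (a) split the dataset into a train set, a test set (target hidden) and a test solution set
    train_df, test_full = train_test_split(df, test_size=test_size, random_state=42)
    test_df = test_full.drop(columns=[target])    # shared with each trainee
    test_solution = test_full[[target]]           # retained by the instructor for evaluation
    return {
        "train_set": train_df,
        "test_set": test_df,
        "test_solution_set": test_solution,
        # (b) independent and dependent features of the dataset
        "independent_features": [c for c in df.columns if c != target],
        "dependent_feature": target,
        # (c) evaluation metric and (d) scoring pattern
        "evaluation_metric": metric,
        "scoring_pattern": {"max_mark": 100, "higher_is_better": False},
    }

Each of these preset values can then be customized by the instructor during task design, for example by changing the train-test split percentage or the metric.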
[0040] In the existing systems, the instructor-side evaluation notebook is usually hand-coded manually. The instructor has to design the task to be evaluated, break it down into sub-steps (subtasks) with preset values based on commonly used or recently used task creation (for example, data selection, data split into training and test, intermediate processing steps or data analysis steps, an AI model creation step, and an evaluation of the AI model step). Further, the tasks are written step by step as markdowns. The instructor may also choose to include partially written code or a code outline with comments, and to include the evaluation code to evaluate the code returned by the trainees for each of those sub-steps they wish to evaluate. However, the system 100 automatically, without manual intervention, generates the evaluation notebooks on the instructor and trainee side.
[0041] Machine learning and AI encompasses not just programming but other skills too. While dealing with data science or machine learning tasks, it is required to have flexibility and scale in getting the evaluations of the users based on machine learning metrics and data types rather than just comparing for equality of primitive values. Most programming-based evaluation platforms only evaluate syntactic correctness or correctness against a reference output. However, the system 100 supports a method for automatic assessments that is flexible enough to work across AI/ML task types. For example, the system 100 can support innumerable types of ML/AI performance and benchmark metrics such as accuracy, mean squared error, Jaccard score, F1 score, or any evaluation metric that is defined once and can be reused across evaluations. Conventional approaches require the instructor or author to read the input file, match it against a test file, check for inconsistencies in the column names, compare the columns to be matched based on the metric (e.g., accuracy), and then write the code for how to implement the evaluation – in this case by comparing the values of the target column with the test file column and calculating the percentage of matches out of the total as a ‘% accuracy score’. This entire logic is hand coded in existing approaches.
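As a purely illustrative sketch of this define-once, reuse-everywhere behaviour (the registry and the function evaluate_predictions are hypothetical names, assuming standard scikit-learn metrics), the evaluation logic described above can be expressed as:

# Illustrative sketch only: a metric registry defined once and reused across evaluations.
import numpy as np
from sklearn.metrics import accuracy_score, mean_squared_error, f1_score, jaccard_score

METRICS = {
    "accuracy": accuracy_score,
    "mse": mean_squared_error,
    "rmse": lambda y_true, y_pred: np.sqrt(mean_squared_error(y_true, y_pred)),
    "f1": lambda y_true, y_pred: f1_score(y_true, y_pred, average="weighted"),
    "jaccard": lambda y_true, y_pred: jaccard_score(y_true, y_pred, average="weighted"),
}

def evaluate_predictions(y_true, y_pred, metric: str) -> float:
    # Apply a registered ML metric to a trainee's output predictions; no per-task
    # hand-coded comparison script is needed once the metric is registered.
    return float(METRICS[metric](y_true, y_pred))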
[0042] As mentioned, Jupyter notebook™ is one implementation of the instructor-side evaluation notebook, which provides an interactive environment where code can be run in different cells. Jupyter notebook™ offers a learner a lot of flexibility and an interactive way of learning programming and seeing output immediately. However, since Jupyter notebook™ involves cell-by-cell evaluation, there are many to-and-fros to be evaluated, as opposed to other platforms where evaluation is done one time at the end while submitting the entire program, making it harder to evaluate in real time and at scale. However, the system 100 automates the generation of the evaluation notebook, as explained with the use case example below:
Use case 1: Instructor flow
1. Instructor logs into the application > navigates to create section > fills a form for Task name and description.
2. Redirected to the task detail page > Edits basic details (concepts, time, group) related to the task.
3. For using advanced options to create an ML task automatically, instructor goes to advanced editing section > select datasets to map as train or test.
4. Chooses from the preset or controls the size of the data sample given to the learner > Controls different/multiple samples of data points for different learners or trainees.
5. Chooses from the preset or for given dataset as a test set, instructor selects a target column from the dataset to test the learner on this column/feature > Chooses from the preset or selects a metric for evaluating the learner’s response > Checks the option for Auto-create admin Notebook for downloading the automatically created notebook with all parameters set > The task which can be automatically evaluated at scale is created automatically, and Learner can start downloading learner notebooks to work upon.
[0043] Thus, as explained above, at step 212 of the method 200, the one or more hardware processors 104 generate an evaluation trainee notebook for each trainee which defines a framework for executing the plurality of subtasks of the task specified by the task design. The evaluation trainee notebook includes the overall framework from the task breakdown given as a notebook with step-by-step tasks to be completed and sometimes even partially complete code that can be embedded within cells that need to be coded and completed by the trainee to get the desired result. This code is then evaluated by the instructor side notebook.
[0044] At step 214 of the method 200, the one or more hardware processors 104 distribute to each trainee the training set and the test set obtained from the dataset in accordance with the plurality of presets. Conventionally, for every ML problem there are a total of three dataset files. These are uploaded as two different files sent to learners: a train set and a test set. A third file called a test solution set is kept by the instructor, interchangeably referred to as the author, for evaluation, which contains the actual values for the target column against which the learner’s predictions will be evaluated. Finally, an evaluation script program is manually coded to evaluate the learner's submission on the test set against the test solution set based on a given measure.
[0045] In conventional approaches, the dataset is copied and referenced to the user side. The user (trainee) gets a workspace on the cloud separate from the original dataset. However, the system 100 references only one single dataset all through the lifecycle for task creation (training, test and creating a solution set), distribution and evaluation. Thus, even when a user is uniquely given a separate dataset, each time only the original dataset is referenced. Even while comparing the solutions with the reference dataset or evaluating a metric, the system 100 does not store any additional copy of the dataset separately for the user. In the system 100 disclosed herein, a centrally managed and growing repository with N pre-existing datasets can be used for learning. New datasets can also be added. Each dataset may have m features and n data points in it as depicted in Table 1 below.
Table 1
              Feature 1   Feature 2    …    Feature m
Datapoint 1      ..          ..        ..      ..
…                ..          ..        ..      ..
Datapoint n      ..          ..        ..      ..
[0046] If there are U users, the system 100 enables distributing the same dataset in different ways to the U users. Any subset u (where u ⊆ U) of the users U can get a unique dataset from the database 108 having the repository of N datasets (thus each user can get a unique dataset as well). Any subset u (where u ⊆ U) of the users U can get a unique data subset ni (where ni ⊆ Ni) from a given dataset Ni (where Ni ∈ N) in the repository of datasets. Any number of datasets can be created for different groups. Each user can get a different data subset as well. Target variables for ML/AI predictions are chosen dynamically while referencing the original dataset. A test author can choose Feature m as a target variable (e.g., fraud or not) in one task and can use another feature m-1 (fraud amount) for another.
[0047] The train dataset, test dataset, and test solution sets are split and managed dynamically based on requirements while referencing the original dataset. The author or the instructor can choose the percentage of the train-test split, which target feature is to be predicted, and how to distribute it to users (multisets/single set). Only while creating the downloadable file is the data fetched dynamically from the original dataset based on the prescribed rules and shared with the user. In the train set, the data points selected for the user, along with the features including the target feature, are visible from the reference dataset. In the test set, when creating the downloadable file, the test set’s target feature data points are not sent. All this is managed centrally and dynamically. While evaluating, the test points are filtered out: the test set is received for evaluation, wherein the rows given to the user are filtered out.
[0048] Datasets are managed dynamically from the original without creating any duplications or copies. This approach of dataset distribution or sampling disclosed herein is memory and space efficient. The time for evaluations is measured for each user or trainee, and benchmarks are shared for a given image dataset of predefined size which has been distributed uniquely to each user. Unlike existing systems, the datasets are managed in a centralized manner allowing assessments to be created rapidly, thus enabling rapid scaling of assessments and evaluating them at scale. So even in a week-long course, 10 hands-on assignments can be created to enable practice.
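A minimal sketch, for illustration only, of deriving a unique per-user train/test view from a single reference dataset without storing per-user copies is given below; the helper name user_split and the hashing-based seeding scheme are assumptions, not the actual sampling technique of the system 100.

# Illustrative sketch only: per-user sampling by reference from one master dataset.
import hashlib
import numpy as np
import pandas as pd

def user_split(master: pd.DataFrame, user_id: str, target: str,
               train_size: int, test_size: int):
    # Only row indices are materialised per user; the master dataset is never duplicated,
    # and the test solution column stays on the server side for evaluation.
    seed = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % (2 ** 32)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(master))
    train_idx = idx[:train_size]
    test_idx = idx[train_size:train_size + test_size]
    train_view = master.iloc[train_idx]                        # includes the target feature
    test_view = master.iloc[test_idx].drop(columns=[target])   # target hidden from the trainee
    test_solution = master.iloc[test_idx][[target]]            # kept only for evaluation
    return train_view, test_view, test_solution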
[0049] At step 216 of the method 200, the one or more hardware processors 104 publish the task in accordance with the type identified for each trainee by mapping the task or corresponding plurality of subtasks to one or more nodes in a concept inventory as depicted in FIG. 6. The concept inventory comprises a hierarchical node structure representing ML concepts, wherein inclination and learning of each trainee is analyzed based on the task or the plurality of subtasks completed by each trainee and the corresponding mapping to the ML concepts. The concept inventory enables measuring scores and skills at a very granular concept level for users, enabling analysis of overall learner (trainee) progress and concept understanding. A subject matter expert in an area is allowed to create the concept inventory for a given subject area. The subject area is first input as a title node. The next-level nodes elaborating the subject area are then detailed. For each node, further nodes are detailed. This process continues until a granular concept is reached, forming a tree structure. Further, if any concept node at any level requires any other pre-requisite skill, that node is also added as a dependent parent node, thus creating a graph structure and relationship.
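A minimal illustrative sketch of such a concept inventory is shown below; the class name ConceptNode and its fields are hypothetical and only demonstrate how prerequisite links turn the tree into a graph.

# Illustrative sketch only: hierarchical concept inventory with prerequisite (dependent parent) links.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConceptNode:
    name: str
    children: List["ConceptNode"] = field(default_factory=list)       # elaborating sub-concepts
    prerequisites: List["ConceptNode"] = field(default_factory=list)  # dependent parent nodes

# Subject area as the title node, elaborated level by level down to granular concepts.
python_basics = ConceptNode("Python basics")
linear_regression = ConceptNode("Linear regression")
regression = ConceptNode("Regression", children=[linear_regression], prerequisites=[python_basics])
ml = ConceptNode("Machine Learning", children=[regression])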
[0050] An example task published to the trainee is to train an ML model for Gross Domestic Product (GDP) prediction. The system 100, also referred to as an AI playground, helps trainees with hands-on training to learn AI. In the GDP example below, the task is to predict the GDP using the given data. The trainee has to develop and train an ML/AI model to perform this prediction. The system 100 then evaluates and shares scores for the prediction results/output obtained by each trainee.
TASK as presented to the learner: GDP Prediction (dataset type task)
Description: You are provided with a training set and a test set in comma-separated values (csv) files named "gdp_train" and "gdp_test".
1. The "gdp_train" sheet, consists of a training dataset containing 200 datapoints
with the following features:
• ID of an administrative unit (say a District)
• Total Population of the given administrative unit
• Total Work Population of the given administrative unit
• Literate Population of the given administrative unit
• GDP of the given administrative unit for a given year in Rs. Crore.
2. The "gdp_test" sheet, consists of a test dataset of 72 data points with same
features as that of the train set except that the GDP values are missing.
Your task is to model a relationship between the GDP and other given features and use this to predict the GDP. You are expected to upload the missing values of GDP in the
test set (in the sheet gdp_test).
Evaluation: Your predicted GDPs will be evaluated against known values based on the RMSE (root mean squared error). In a conventional solution, a script to perform this evaluation is manually programmed by the author. The script accepts the submission file from the learner, compares it against a reference file with the actual values, computes the defined metric (which is RMSE in this case), and then computationally transforms this metric into a score in a learner-understandable format (e.g., a percentage). Such a script has to be written for each problem created so that it can be automatically evaluated.
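For illustration, a minimal sketch of such a conventional, hand-coded evaluation script is given below; the column names "ID" and "GDP" and the scoring transformation are assumptions used only to mirror the description above.

# Illustrative sketch only: conventional, manually written evaluation script for the GDP task.
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error

def evaluate_gdp_submission(submission_csv: str, reference_csv: str) -> float:
    submission = pd.read_csv(submission_csv)   # learner's gdp_test sheet with predicted GDP
    reference = pd.read_csv(reference_csv)     # instructor's test solution set with actual GDP
    merged = reference.merge(submission, on="ID", suffixes=("_true", "_pred"))
    rmse = np.sqrt(mean_squared_error(merged["GDP_true"], merged["GDP_pred"]))
    # Hypothetical scoring pattern: transform RMSE into a learner-understandable percentage.
    score = max(0.0, 100.0 * (1.0 - rmse / merged["GDP_true"].std()))
    return score

In the system 100, this per-problem scripting is avoided because the split, the metric and the scoring pattern are taken from the presets.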
[0051] At step 218 of the method 200, the one or more hardware processors 104 dynamically provision a plurality of parameters based on execution environment of each trainee, and wherein the execution environment comprises an offline mode and an online mode to be opted by each trainee.
[0052] The offline mode scales the number of trainees that are trained and evaluated by enabling each trainee to install the evaluation trainee notebook on a local system, with evaluation of the exercise received via an application/package installed on the local system. The application comprises the functions and protocols to enable authenticated/secure interaction with the application server for specific communication comprising: testing the authenticity of the evaluation trainee notebook for each trainee and the task; fetching questions and data for the task and saving them locally using an auth key; sending the locally run, cell-wise, step-by-step programming output to the application server; fetching a score in accordance with the scoring pattern after evaluation is done on the application server; obtaining the scores on demand; and submitting and saving a final status of the evaluation trainee notebook with the entire code and details to the application server.
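A purely illustrative sketch of this offline-mode interaction is shown below; the server URL, endpoint paths, and payload fields are hypothetical assumptions and do not describe the actual protocol of the system 100.

# Illustrative sketch only: offline-mode trainee package interacting with the application server.
import requests

SERVER = "https://aiplayground.example.com/api"   # hypothetical application server URL

def fetch_task(auth_key: str, task_id: str) -> dict:
    # Test authenticity of the trainee notebook, then fetch questions and data to save locally.
    r = requests.get(f"{SERVER}/tasks/{task_id}", headers={"Authorization": auth_key})
    r.raise_for_status()
    return r.json()

def submit_cell_output(auth_key: str, task_id: str, cell_id: str, output) -> float:
    # Send locally run, cell-wise output to the server; evaluation and scoring happen server-side.
    r = requests.post(f"{SERVER}/tasks/{task_id}/cells/{cell_id}",
                      json={"output": output}, headers={"Authorization": auth_key})
    r.raise_for_status()
    return r.json()["score"]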
[0053] The online mode comprises providing each trainee with a workspace via a dedicated instance through a thin client with predefined resource availability per user, wherein an online environment is provisioned for each trainee with all dependencies and resources dedicated to a single trainee, and wherein the packages and the evaluation notebook for each trainee to work with are already present in the workspace.
[0054] The system 100 supports dynamic provisioning of the environment, including libraries, GPUs, memory, and storage for users in the online or offline mode. This entire setup can operate on Virtual Machines (VMs), a cluster of VMs, single cloud, or multi cloud environments, or any combination available.
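A small illustrative sketch of a per-task provisioning specification that could drive such dynamic provisioning is given below; the keys and values are assumed for illustration and are not the actual configuration schema of the system 100.

# Illustrative sketch only: per-task environment specification for dynamic provisioning.
provisioning_spec = {
    "mode": "online",                                  # or "offline"
    "libraries": ["tensorflow", "scikit-learn", "nltk"],
    "gpu": {"required": True, "count": 1},
    "memory_gb": 16,
    "storage_gb": 50,
    "target": "vm-cluster",                            # VM, cluster of VMs, single or multi cloud
}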
[0055] FIG. 3A depicts the functional flow of the system 100. The database 108 contains the necessary task 1 information. The storage 112 (file storage) contains the centrally managed dataset repository where dataset 1 is present as one example of the dataset needed for task 1. In the fetch dataset request, dataset 1 contains, for example, 7 features and 1,000 instances. When a trainee executes the corresponding notebook commands on the thin client (offline/local or online), the system 100 via the thin client fetches the train subset (size 500×7) and the test subset (size 200×7) uniquely mapped to user 1 (trainee 1) based on one of the sampling techniques for data distribution and serves it to the user (trainee) as dataset files if locally required by the user. In the evaluation request, after the user completes the task, the user submits the test subset with their solution (size 200×7) for evaluation. During evaluation, the user's unique submission (size 200×7) is compared with the corresponding unique test solution from dataset 1 using the common indices, and the score is computed for the user. Thus, this does not make multiple copies of the data even if a unique train-test dataset is needed to be evaluated for n users.
[0056] FIG. 3B depicts the role-based process flow of the system 100 for user creation, segregated into various groups comprising the admin, the instructor, and the trainee. The admin can create user groups, which have privileges assigned. The instructor performs multiple tasks of creating concepts and connecting them to specialized learning paths which hold courses and modules for learning and assessments for evaluations. The instructor can also create AI/ML based assessments which track active performance of users via real-time leaderboards; if required, the submissions can be downloaded by the instructor. The trainee can see the catalog of specializations and enroll according to their choice/requirement. Post enrollment they can learn from available content and keep track of their progress. The system thus enables automatic generation of assessments across a variety of ML tasks. This includes support for programming tasks, ML hackathons with configurable ML measures, low code/no code assessments, etc. The system 100 can conduct unlimited assessments with ease, mass personalizing questions and crowdsourcing answers to questions. This helps prevent plagiarism and fraud by not giving an opportunity to do so in the first place. This also helps teams crowdsource varied and new approaches/solutions to the ML/AI problem at hand.
[0057] Thus, the system 100 disclosed herein provides:
1. Scalable hands-on platform for AI learning and assessment, via the architecture for scaling ML/AI evaluations and enabling concurrency, with scaling possible online, or on-demand; both vertically and horizontally.
2. On demand provisioning of environments and infra required for ML/AI hands-on learning.
3. Dataset management for mass-personalization of datasets, wherein each user can get a different dataset or a data subset.
4. Automatic assessments that are flexible to work across AI/ML task types, while supporting innumerable types of ML/AI performance and benchmark metrics. Most existing AI/ML assessment platforms only evaluate syntactic correctness or correctness against a reference output.
5. Multiple ways of evaluation of ML assessments (as opposed to just coding assessments).
6. Authoring tool for ease of authoring ML/AI assessments, reducing time to creation from hours to a few seconds.
7. Assessments can be done in combination with assistive coding interfaces (GUI interactions for coding and AI assisted coding) to help learners understand key concepts of ML/AI without syntax becoming a primary barrier.
8. Visual learning of underlying algorithms.
9. Datasets for ML and AI can be managed centrally on the cloud and distributed for use by learners in novel ways, allowing mass-customization of datasets for each learner.
10. Concept inventory hierarchical graph structure with dashboards for micro and macro analysis of skills.
11. Data types, ranging from primitive, derived, to complex data structures such as Data Frames and Tensors can be evaluated.
12. The datasets are provisioned to the users (trainees) via evaluation notebooks directly with the support of the package. Datasets provisioned to each user can be a unique set of datapoints, and the user can work upon their own copy of the datasets. Predictions based on datasets can vary for each user, hence sharing solutions as a malpractice does not help.
13. The system 100 can scale at large, with both offline as well as online environments supported. Only the evaluation part is done on the server, based on the output generated in the user’s environment.
14. The system 100 performs well in case of large dataset sizes as well. For example, even if the test data is of 100 MB, transporting the predictions to the application server and evaluating them still consumes few resources.
15. The system 100 supports a variety of machine learning problems, from working on structured datasets to image datasets.
16. The system 100 offers the training and assessment on a read–eval–print loop (REPL) environment, which is by nature interactive, allowing cell-by-cell execution and viewing of output. Thus, the system and method can work in both normal programming as well as REPL mode. REPL, which is well known in the art, provides an interactive environment to explore tools available in specific environments or programming languages.
[0058] Use case examples depicting the system flow of task creation and training for programming task, dataset task and a combinational task set by the instructor for the trainee are provided below.
USECASE1-Programming task
Instructor-side evaluation notebook (Faculty Notebook)
SET SESSION
In [ ]:
from evaluator import Session
Session.init()
In [ ]:
# Check your login
Session.login()
FETCH ASSESSMENT NOW
In [ ]:
from evaluator import evaluate, Test, File
In [ ]:
# Get the test methods that you have to pass
Test.get_test_methods()
In [ ]:
# Download the related files, you need to run this cell only once.
File.download_data()
PLAY NOW
In [ ]:
# problem 1
@evaluate
def backword(s):
    """ input is a string of words (a sentence) the function
    must return the string in the reverse order of words
    for example, if s = 'Today is not a Holiday' then the function
    must return 'Holiday a not is Today'
    """
    back_str = None
    return back_str
In [ ]:
backword("Today is not a Holiday", test="Backword")
In [ ]:
# problem 2
@evaluate
def min2front(lst):
    """ input lst is a List of integers. the function should move the first occurrence of the
    minimum value in the list to the front, otherwise preserving the order of the
    elements and return the lst.
    e.g. : if input is the list [3, 8, 92, 4, 2, 17, 9] then the function should
    return [2, 3, 8, 92, 4, 17, 9]
    """
    new_lst = []
    return new_lst
In [ ]:
min2front([3, 8, 92, 4, 2, 17, 9], test="Min 2 Front")
In [ ]:
# problem 3
@evaluate
def strincommon(lst1, lst2):
    """ input is two lists of strings
    output is a list that contains the strings that are common to both the
    lists
    e.g : if input lists are ['this','is','a', 'string'] and ['this', 'is', 'not']
    then the function should return the list ['this', 'is']
    """
    return ''
In [ ]:
strincommon(['this','is','a', 'string'], ['this', 'is', 'not'], test='String in common')
In [ ]:
# problem 4
@evaluate
def remove_adjacent(nums):
    """ Given a list of numbers, return a list where
    all adjacent == elements have been reduced to a single element,
    so [1, 2, 2, 3] returns [1, 2, 3]. You may create a new list or
    modify the passed in list."""
    return ''
In [ ]:
remove_adjacent([1, 2, 2, 3], test="Remove Adjacent")
In [ ]:
# problem 5
@evaluate
def pivot(x, lst):
    """ Write a function that takes a value x and a list lst, and returns
    a list that contains the value x and all elements of lst such that all
    values y in lst that are smaller than x come first, then the element x and
    then the rest of the values in lst
    For example, the output of pivot(3, [6, 4, 1, -1, 0, 7,5]) should be
    [1,-1,0, 3, 6, 4, 7,5]
    """
    return ''
In [ ]:
pivot(3, [6, 4, 1, -1, 0, 7,5], test='Pivot')
In [ ]:
# Submit the test before the stop time.
Test.submit()
In [ ]:
# Check your position in the leaderboard
Test.leaderboard(latest=False) # latest=True to see the latest leaderboard
In [ ]:
# To see only your submission scores
USERNAME = input("Enter your AIPlayground username:")
lbd = Test.leaderboard(all_trials=True)
lbd.loc[lbd.username==USERNAME]
Thank you :)
======== ADMIN SECTION BELOW ========
The notebook is divided into two segments: the top half, up to the cell above, becomes the user's view, and the bottom segment becomes the admin part with solutions.
SET SESSION
In [1]:
from admin import Session
Session.init()
INITIALIZING! Please wait for a while and login below..
In [2]:
# Check your login
Session.login()
Logging In..
.
.
Logged in with username: B25_1837855
Import the admin package as shown below :-
In [4]:
from admin import Task, File
from admin import concept_inventory as ci
Upload a dataset like this (Not Recommended! You can use existing datasets) :-
Note: Please do not upload a same dataset twice
In [5]:
# File.upload("height_vs_weight.csv")
Import Datasets like this :-
In [ ]:
from admin import datasets as dt
Let’s start creating the task.
In [ ]:
# # Take help on creating a task
# help(Task)
In [6]:
task = Task()
In [7]:
task.set_name("Python Assignment 4")
task.set_desc("Python Assignment 4.")
In [8]:
task.set_concepts(ci.Python)
Copy the notebook cells with solution below (Or just write the solution)
In [10]:
# problem 1
def backword(s):
    """ input is a string of words (a sentence) the function
    must return the string in the reverse order of words
    for example, if s = 'Today is not a Holiday' then the function
    must return 'Holiday a not is Today'
    """
    list1 = s.split()
    list1.reverse()
    newstr = " ".join(list1)
    return newstr

# problem 2
def min2front(lst):
    """ input lst is a List of integers. the function should move the first occurrence of the
    minimum value in the list to the front, otherwise preserving the order of the
    elements and return the lst.
    e.g : if input is the list [3, 8, 92, 4, 2, 17, 9] then the function should
    return [2, 3, 8, 92, 4, 17, 9]
    """
    if lst:
        tmp = min(lst)
        lst.remove(min(lst))
        lst.insert(0, tmp)
        return lst
    else:
        return lst

# problem 3
def strincommon(lst1, lst2):
    """ input is two lists of strings
    output is a list that contains the strings that are common to both the
    lists
    e.g : if input lists are ['this','is','a', 'string'] and ['this', 'is', 'not']
    then the function should return the list ['this', 'is']
    """
    new = []
    for first in lst1:
        for second in lst2:
            if first == second:
                new.append(first)
                break
    return new
# problem 4
def remove_adjacent(nums):
    """ Given a list of numbers, return a list where
    all adjacent == elements have been reduced to a single element,
    so [1, 2, 2, 3] returns [1, 2, 3]. You may create a new list or
    modify the passed in list."""
    length = len(nums)
    i = 0
    while i < length - 1:
        if nums[i] == nums[i + 1]:
            del nums[i]
            length -= 1
        else:
            i += 1
    return nums

# problem 5
def pivot(x, lst):
    """ Write a function that takes a value x and a list lst, and returns
    a list that contains the value x and all elements of lst such that all
    values y in lst that are smaller than x come first, then the element x and
    then the rest of the values in lst
    For example, the output of pivot(3, [6, 4, 1, -1, 0, 7,5]) should be
    [1,-1,0, 3, 6, 4, 7,5]
    """
    newlist = []
    for elem in lst:
        if elem < x:
            newlist.append(elem)
    newlist.append(x)
    for elem in lst:
        if elem >= x:
            newlist.append(elem)
    return newlist
Set test cases below :-
In [11]:
from sklearn.metrics import mean_squared_error
# Test method with test cases
task.make_test_cases(
backword,
test_cases = [
Task.test_case('Today is Holiday'),
Task.test_case('Why learn Python?'),
Task.test_case(''),
],
save = True,
method_name = "Backword",
max_mark = 25
)
task.make_test_cases(
min2front,
test_cases = [
Task.test_case([4,6,12,3,2]),
Task.test_case([]),
Task.test_case([1,2,3,4])
],
save = True,
method_name = "Min 2 Front",
max_mark = 25
)
task.make_test_cases(
strincommon,
test_cases = [
Task.test_case(['this','is','a', 'string'] , ['this', 'is', 'not']),
Task.test_case(['is','this','a', 'string'] , ['this', 'is', 'a', 'string']),
Task.test_case(['this','is','a', 'string'] , ['python', 'java', 'ruby'])
],
save = True,
method_name = "String in common",
max_mark = 25
)
task.make_test_cases(
remove_adjacent,
test_cases = [
Task.test_case([1, 2, 2, 3]),
Task.test_case([1, 2, 3, 2]),
Task.test_case([1,2, 2, 3, 3, 3]),
Task.test_case([])
],
save = True,
method_name = "Remove Adjacent",
max_mark = 25
)
task.make_test_cases(
pivot,
test_cases = [
Task.test_case(3, [6, 4, 1, -1, 0, 7,5]),
Task.test_case(-1,[5,6,7,8]),
Task.test_case(9,[1,2,7,4])
],
save = True,
method_name = "Pivot",
max_mark = 25
)
task.show_test_cases()
Function backword
Test method: Backword
Data: None
Target: None
Eval method: None
Max Mark: 25
In: {'args': ('Today is Holiday',), 'kwargs': {}, 'mark': 100}
Out: Holiday is Today
Mark: 100
In: {'args': ('Why learn Python?',), 'kwargs': {}, 'mark': 100}
Out: Python? learn Why
Mark: 100
In: {'args': ('',), 'kwargs': {}, 'mark': 100}
Out:
Mark: 100
------------------------------
Function min2front
Test method: Min 2 Front
Data: None
Target: None
Eval method: None
Max Mark: 25
In: {'args': ([2, 4, 6, 12, 3],), 'kwargs': {}, 'mark': 100}
Out: [2, 4, 6, 12, 3]
Mark: 100
In: {'args': ([],), 'kwargs': {}, 'mark': 100}
Out: []
Mark: 100
In: {'args': ([1, 2, 3, 4],), 'kwargs': {}, 'mark': 100}
Out: [1, 2, 3, 4]
Mark: 100
------------------------------
Function strincommon
Test method: String in common
Data: None
Target: None
Eval method: None
Max Mark: 25
In: {'args': (['this', 'is', 'a', 'string'], ['this', 'is', 'not']), 'kwargs': {}, 'mark': 100}
Out: ['this', 'is']
Mark: 100
In: {'args': (['is', 'this', 'a', 'string'], ['this', 'is', 'a', 'string']), 'kwargs': {}, 'mark': 100}
Out: ['is', 'this', 'a', 'string']
Mark: 100
In: {'args': (['this', 'is', 'a', 'string'], ['python', 'java', 'ruby']), 'kwargs': {}, 'mark': 100}
Out: []
Mark: 100
------------------------------
Function remove_adjacent
Test method: Remove Adjacent
Data: None
Target: None
Eval method: None
Max Mark: 25
In: {'args': ([1, 2, 3],), 'kwargs': {}, 'mark': 100}
Out: [1, 2, 3]
Mark: 100
In: {'args': ([1, 2, 3, 2],), 'kwargs': {}, 'mark': 100}
Out: [1, 2, 3, 2]
Mark: 100
In: {'args': ([1, 2, 3],), 'kwargs': {}, 'mark': 100}
Out: [1, 2, 3]
Mark: 100
In: {'args': ([],), 'kwargs': {}, 'mark': 100}
Out: []
Mark: 100
------------------------------
Function pivot
Test method: Pivot
Data: None
Target: None
Eval method: None
Max Mark: 25
In: {'args': (3, [6, 4, 1, -1, 0, 7, 5]), 'kwargs': {}, 'mark': 100}
Out: [1, -1, 0, 3, 6, 4, 7, 5]
Mark: 100
In: {'args': (-1, [5, 6, 7, 8]), 'kwargs': {}, 'mark': 100}
Out: [-1, 5, 6, 7, 8]
Mark: 100
In: {'args': (9, [1, 2, 7, 4]), 'kwargs': {}, 'mark': 100}
Out: [1, 2, 7, 4, 9]
Mark: 100
------------------------------
In [ ]:
task.set_time(start_time='2022-02-14 08:30:00', stop_time='2022-02-21 17:30:00')
Session.save_notebook()
In [10]:
task.save_or_update()
# # Or update only the test cases
# task.save_or_update(update_test_cases=True, update_notebook=False)
# # Or update only the notebook
# task.save_or_update(update_test_cases=False, update_notebook=True)
play@aip :: Task saved successfully
===X===
Trainee Notebook
Python Assignment 4
Concepts : Python
Description : Python Assignment 4.
Posted by : xyz Posted on : 2022-02-14 04:33:29
________________________________________
Name of Candidate : abc
Group : Ignite, Trainee
Start time : 2022-02-14 03:00:00+00:00    Stop time : 2022-07-14 11:30:00+00:00
SET SESSION
In [ ]:
from evaluator import Session
Session.init()
In [ ]:
# Check your login
Session.login()
FETCH ASSESSMENT NOW
In [ ]:
from evaluator import evaluate, Test, File
In [ ]:
# Get the test methods that you have to pass
Test.get_test_methods()
In [ ]:
# Download the related files, you need to run this cell only once.
File.download_data()
PLAY NOW
In [ ]:
# problem 1
@evaluate
def backword(s):
""" input is a string of words (a sentence) the function
must return the string in the reverse order of words
for example, if s = 'Today is not a Holiday' then the function
must return 'Holiday a not is Today'
"""
back_str = None
return back_str
In [ ]:
backword("Today is not a Holiday", test="Backword")
In [ ]:
# problem 2
@evaluate
def min2front(lst):
""" input lst is a List of integers. the function should move the first occurence of the
minimum value in the list to the front, otherwise preserving the order of the
elements and return the lst.
e.g : if input is the list [3, 8, 92, 4, 2, 17, 9] then the function should
return [2, 3, 8, 92, 4, 17, 9]
"""
new_lst = []
return new_lst
In [ ]:
min2front([3, 8, 92, 4, 2, 17, 9], test="Min 2 Front")
In [ ]:
# problem 3
@evaluate
def strincommon(lst1, lst2):
""" input is two lists of strings
output is a list that contains the strings that are common to both the
lists
e.g : if input lists are ['this','is','a', 'string'] and ['this', 'is', 'not']
then the function should return the list ['this', 'is']
"""
return ''
In [ ]:
strincommon(['this','is','a', 'string'], ['this', 'is', 'not'], test='String in common')
In [ ]:
# problem 4
@evaluate
def remove_adjacent(nums):
""" Given a list of numbers, return a list where
all adjacent == elements have been reduced to a single element,
so [1, 2, 2, 3] returns [1, 2, 3]. You may create a new list or
modify the passed in list."""
return ''
In [ ]:
remove_adjacent([1, 2, 2, 3], test="Remove Adjacent")
In [ ]:
# problem 5
@evaluate
def pivot(x,lst):
""" Write a function that takes a value x and a list lst, and returns
a list that contains the value x and all elements of lst such that all
values y in lst that are smaller than x come first, then the element x and
then the rest of the values in lst
For example, the output of pivot(3, [6, 4, 1, -1, 0, 7,5]) should be
[1,-1,0, 3, 6, 4, 7,5]
"""
return ''
In [ ]:
pivot(3, [6, 4, 1, -1, 0, 7,5], test='Pivot')
In [ ]:
# Submit the test before the stop time.
Test.submit()
In [ ]:
# Check your position in the leaderboard
Test.leaderboard(latest=False) # latest=True to see the latest leaderboard
In [ ]:
# To see only your submission scores
USERNAME = input("Enter your AIPlayground username:")
lbd = Test.leaderboard(all_trials=True)
lbd.loc[lbd.username==USERNAME]
Thank you :)
USECASE 2 – Prediction task (dataset task)
Faculty Notebook
SET SESSION
In [ ]:
from evaluator import Session
Session.init()
In [ ]:
# Check your login
Session.login()
FETCH ASSESSMENT NOW
In [ ]:
from evaluator import evaluate, Test, File
In [ ]:
# Get the test methods that you have to pass
Test.get_test_methods()
In [ ]:
# Download the related files, you need to run this cell only once.
File.download_data()
PLAY NOW
Gender Prediction
In [ ]:
@evaluate
def predictions():
    '''Find all predictions and save it in a new column `Gender` in the test dataset
    :return test_df (pandas.DataFrame):
        Updated DataFrame with added `Gender` column
    '''
    return
In [ ]:
predictions(test='Test Predictions')
In [ ]:
# Submit the test before the stop time.
Test.submit()
In [ ]:
# Check your position in the leaderboard
Test.leaderboard(latest=False) # latest=True to see the latest leaderboard
In [ ]:
# To see only your submission scores
USERNAME = input("Enter your AIPlayground username:")
lbd = Test.leaderboard(all_trials=True)
lbd.loc[lbd.username==USERNAME]
Thank you :)
======== ADMIN SECTION BELOW ========
The notebook will be divided into two segments: the top half, above this marker cell, becomes the user's view, and the bottom segment becomes the admin part with solutions.
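By way of a rough illustration only (not the platform's actual implementation; the file names and helper function are assumptions), such a split could be performed on the saved notebook with the nbformat library, cutting at the admin marker cell:

import nbformat

ADMIN_MARKER = "======== ADMIN SECTION BELOW ========"   # marker cell text used in this notebook

def split_notebook(path, trainee_path, admin_path):
    nb = nbformat.read(path, as_version=4)
    # index of the first cell containing the admin marker (assumes the marker is present)
    cut = next(i for i, cell in enumerate(nb.cells) if ADMIN_MARKER in cell.source)
    trainee_nb = nbformat.v4.new_notebook(cells=nb.cells[:cut], metadata=nb.metadata)
    admin_nb = nbformat.v4.new_notebook(cells=nb.cells[cut:], metadata=nb.metadata)
    nbformat.write(trainee_nb, trainee_path)   # user's view, without solutions
    nbformat.write(admin_nb, admin_path)       # admin part with solutions and test cases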
SET SESSION
In [1]:
from admin import Session
Session.init()
INITIALIZING! Please wait for a while and login below..
In [2]:
# Check your login
Session.login()
Logging In..
.
.
Logged in with username: B30_2065253
Import the admin package as shown below :-
In [3]:
from admin import Task, File
from admin import concept_inventory as ci
Upload a dataset like this (Not Recommended! You can use existing datasets) :-
Note: Please do not upload the same dataset twice
In [ ]:
#File.upload("data.csv")
Import Datasets like this:-
In [4]:
from admin import datasets as dt
Let us start creating the task.
In [5]:
task = Task()
In [6]:
task.set_name("Gender Prediction")
task.set_desc("Create a model to predict the gender on basic of height and weight.")
In [7]:
task.set_concepts(ci.Knn,ci.Pandas)
In [8]:
task.add_dataset(dt.DataCsv, train_bucket=0.75, target_cols=['Gender'], test_size=0.5, splits=0)
Copy the notebook cells with solution below (Or just write the solutions)
In [9]:
def prediction():
    return
Set test cases below :-
In [10]:
from admin.metrics import accuracy_score
task.make_test_cases(
prediction,
save = True,
method_name = "Test Predictions",
max_mark = 100,
data = dt.DataCsv,
target = "Gender",
eval_method = accuracy_score
)
task.show_test_cases()
Function prediction
Test method: Test Predictions
Data: 11590
Target: Gender
Eval method: accuracy_score
Max Mark: 100
In: None
Out: None
Mark: 100
------------------------------
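For illustration only, scoring such a dataset task amounts to applying the configured evaluation metric to the trainee's predicted target column against the held-out solution values and scaling the result to the maximum mark. The sketch below uses scikit-learn's accuracy_score and assumed variable names; it is not the platform's evaluation code:

from sklearn.metrics import accuracy_score

def score_dataset_task(submitted_df, solution_df, target="Gender", max_mark=100):
    # compare the trainee's predicted column with the actual values held by the instructor
    metric_value = accuracy_score(solution_df[target], submitted_df[target])
    return metric_value * max_mark   # e.g. 0.87 accuracy on a 100-mark task scores 87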
In [ ]:
task.set_time(start_time='2022-01-02 08:30:00', stop_time='2022-12-20 17:30:00')
Session.save_notebook()
In [ ]:
task.save_or_update()
# # Or update only the test cases
# task.save_or_update(update_test_cases=True, update_notebook=False)
# # Or update only the notebook
# task.save_or_update(update_test_cases=False, update_notebook=True)
===X===
Trainee Notebook
Gender Prediction
Concepts : KNN, Pandas
Description : Create a model to predict the gender on the basis of height and weight.
Posted by : xyz Posted on : 2022-02-17 08:16:10
________________________________________
Name of Candidate : abc
Group : Ignite, Trainee
Start time : 2022-01-02 03:00:00+00:00 Stop time : 2022-07-15 12:00:00+00:00
SET SESSION
In [ ]:
from evaluator import Session
Session.init()
In [ ]:
# Check your login
Session.login()
FETCH ASSESSMENT NOW
In [ ]:
from evaluator import evaluate, Test, File
In [ ]:
# Get the test methods that you have to pass
Test.get_test_methods()
In [ ]:
# Download the related files, you need to run this cell only once.
File.download_data()
PLAY NOW
Gender Prediction
In [ ]:
@evaluate
def predictions():
    '''Find all predictions and save it in a new column `Gender` in the test dataset
    :return test_df (pandas.DataFrame):
        Updated DataFrame with added `Gender` column
    '''
    return
In [ ]:
predictions(test='Test Predictions')
In [ ]:
# Submit the test before the stop time.
Test.submit()
In [ ]:
# Check your position in the leaderboard
Test.leaderboard(latest=False) # latest=True to see the latest leaderboard
In [ ]:
# To see only your submission scores
USERNAME = input("Enter your AIPlayground username:")
lbd = Test.leaderboard(all_trials=True)
lbd.loc[lbd.username==USERNAME]
Thank you :)
USECASE 3 – combinational task {Programming task + Prediction (dataset) task}
Faculty Notebook
SET SESSION
In [ ]:
from evaluator import Session
Session.init()
In [ ]:
# Check your login
Session.login()
FETCH ASSESSMENT NOW
In [ ]:
from evaluator import evaluate, Test, File
In [ ]:
# Get the test methods that you have to pass
Test.get_test_methods()
In [ ]:
# Download the related files, you need to run this cell only once.
File.download_data()
Flower Species Prediction
In [ ]:
@evaluate
def flower_species_prediction():
    '''Find all predictions and save it in a new column `species` in the test dataset
    :return test_df (pandas.DataFrame):
        Updated DataFrame with added `species` column
    '''
    return
In [ ]:
flower_species_prediction(test='Test Predictions')
Filter word
In [ ]:
@evaluate
def filter_word(ls, l):
    """
    define a function filter_word using list comprehension to keep all the words longer than the given length.
    example:
    input: ['aaa','sdfgh','aa','sfdd','b'], 3
    output: ['sdfgh','sfdd']
    """
    return None
In [ ]:
filter_word(['aaa','sdfgh','aa','sfdd','b'], 3, test='Test Filter word')
In [ ]:
# Submit the test before the stop time.
Test.submit()
In [ ]:
# Check your position in the leaderboard
Test.leaderboard(latest=False) # latest=True to see the latest leaderboard
In [ ]:
# To see only your submission scores
USERNAME = input("Enter your AIPlayground username:")
lbd = Test.leaderboard(all_trials=True)
lbd.loc[lbd.username==USERNAME]
Thank you :)
======== ADMIN SECTION BELOW ========
The notebook is divided into two segments: the top half, above this marker cell, becomes the user's view, and the bottom segment becomes the admin part with solutions.
SET SESSION
In [ ]:
from admin import Session
Session.init()
In [ ]:
# Check your login
Session.login()
Import the admin package as shown below :-
In [ ]:
from admin import Task, File
from admin import concept_inventory as ci
Upload a dataset like this (Not Recommended! You can use existing datasets) :-
Note: Please do not upload the same dataset twice
#File.upload("data.csv")
Import Datasets like this :-
In [ ]:
from admin import datasets as dt
Let us start creating the task.
In [ ]:
task = Task()
In [ ]:
task.set_name("Flower Species Prediction")
task.set_desc("Create a model to predict the species of a flower based on their physical attributes")
In [ ]:
task.set_concepts(ci.Knn,ci.Pandas,ci.Python)
In [ ]:
task.add_dataset(dt.Iris, train_bucket=0.75, target_cols=['species'], test_size=0.5, splits=0)
Copy the notebook cells with solution below (Or just write the solutions)
In [ ]:
def flower_species_prediction():
    return
In [ ]:
def filter_word(ls,l):
    return [w for w in ls if len(w) > l]
Set test cases below :-
In [ ]:
from admin.metrics import accuracy_score
task.make_test_cases(
flower_species_prediction,
save = True,
method_name = "Test Predictions",
max_mark = 100,
data = dt.Iris,
target = "species",
eval_method = accuracy_score
)
task.make_test_cases(
filter_word,
test_cases = [
Task.test_case(['aa','aaa','aaaa','a'],2, _mark=10),
Task.test_case(['aa','aaa','aaaa','a'],3, _mark=10)
],
save = True,
method_name = "Test Filter word",
max_mark = 20
)
task.show_test_cases()
In [ ]:
task.set_time(start_time='2022-07-26 08:30:00', stop_time='2022-07-27 17:30:00')
Session.save_notebook()
In [ ]:
task.save_or_update()
# # Or update only the test cases
# task.save_or_update(update_test_cases=True, update_notebook=False)
# # Or update only the notebook
# task.save_or_update(update_test_cases=False, update_notebook=True)
Trainee Notebook
L-Python
Concepts : KNN, Pandas, Python
Description : Create a model to predict the species of a flower based on its physical attributes
Posted by : xyz Posted on : 2022-07-26 14:30:00
________________________________________
Name of Candidate : abc
Group : Ignite, Trainee
Start time : 2022-07-26 08:30:00+00:00    Stop time : 2022-07-27 17:30:00+00:00
SET SESSION
In [ ]:
from evaluator import Session
Session.init()
In [ ]:
# Check your login
Session.login()
FETCH ASSESSMENT NOW
In [ ]:
from evaluator import evaluate, Test, File
In [ ]:
# Get the test methods that you have to pass
Test.get_test_methods()
In [ ]:
# Download the related files, you need to run this cell only once.
File.download_data()
PLAY NOW
Flower Species Prediction
In [ ]:
@evaluate
def flower_species_prediction():
    '''Find all predictions and save it in a new column `species` in the test dataset
    :return test_df (pandas.DataFrame):
        Updated DataFrame with added `species` column
    '''
    return
In [ ]:
flower_species_prediction(test='Test Predictions')
Filter word
In [ ]:
@evaluate
def filter_word(ls, l):
    """
    define a function filter_word using list comprehension to keep all the words longer than the given length.
    example:
    input: ['aaa','sdfgh','aa','sfdd','b'], 3
    output: ['sdfgh','sfdd']
    """
    return None
In [ ]:
filter_word(['aaa','sdfgh','aa','sfdd','b'], 3, test='Test Filter word')
In [ ]:
# Submit the test before the stop time.
Test.submit()
In [ ]:
# Check your position in the leaderboard
Test.leaderboard(latest=False) # latest=True to see the latest leaderboard
In [ ]:
# To see only your submission scores
USERNAME = input("Enter your AIPlayground username:")
lbd = Test.leaderboard(all_trials=True)
lbd.loc[lbd.username==USERNAME]
[0059] FIG. 4A depicts process steps and FIG. 4B depicts an architectural overview for an online mode and an offline mode of evaluation that can be opted by the trainees via the system of FIG. 1, in accordance with some embodiments of the present disclosure. FIG. 4A depicts the trainee's workflow for taking assessments on the online and offline platforms. The online environment is dynamically provisioned on demand with the required dependencies to take assessments, while the offline environment provides the flexibility for users to use their own resources. The sequential steps of the flow diagram of FIG. 4A depict the trainee's entire journey to complete available tasks in the environment of their choice.
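Expressed in terms of the client-package calls already shown in the trainee notebooks above, the end-to-end journey in either mode condenses to roughly the following sequence (the sample solution body is illustrative only):

from evaluator import Session, Test, File, evaluate

Session.init()              # set up the session (online workspace or local offline install)
Session.login()             # authenticate the trainee
Test.get_test_methods()     # fetch the test methods to pass
File.download_data()        # download the related files once

@evaluate
def backword(s):
    return " ".join(reversed(s.split()))   # illustrative solution to problem 1

backword("Today is not a Holiday", test="Backword")   # run and evaluate against the named test method
Test.submit()               # submit before the stop time
Test.leaderboard(latest=True)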
[0060] FIG. 4B depicts the High Level Design (HLD) of the system 100 used to assess and track trainee learning and progress in the assessments. The evaluable notebooks contain methods for trainees to securely start and evaluate their assessments, wherein the client package is a Python based utility exposing the necessary APIs, and the Play backend is the base application that drives the assessments and tracks progress.
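Purely as a sketch of the idea (the endpoint, payload and helper names are assumptions and not the actual client package), such an evaluable-notebook decorator could run the trainee's code locally and forward the result to the Play backend for marking against a named test method:

import functools
import requests

PLAY_BACKEND_URL = "https://play.example.com/api"   # placeholder, not the real backend address

def evaluate(func):
    @functools.wraps(func)
    def wrapper(*args, test=None, **kwargs):
        result = func(*args, **kwargs)               # run the trainee's code locally
        if test is not None:                         # route the output to the named test method
            requests.post(f"{PLAY_BACKEND_URL}/evaluate",
                          json={"test_method": test, "output": result},
                          timeout=30)
        return result
    return wrapper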
[0061] FIGS. 5A and 5B depict dynamic provisioning of parameters based on the execution environment comprising the offline mode and the online mode respectively, in accordance with some embodiments of the present disclosure. FIG. 5A depicts the steps involved in provisioning the offline environment to take assessments. An environment manager implemented by the one or more hardware processors handles the seamless setup of the environment and generates a go-ahead signal, after which the required files are transferred and the assessments are evaluated. A control plane, implemented by the one or more hardware processors, is the key component that manages infrastructure, scales on demand and ensures high availability. FIG. 5B depicts client (thin client) to cloud (application server) communication through the control plane to dynamically provision the environment and facilitate the assessment and its evaluation. The environment complexities are abstracted from the user, and the on-demand environment is available to take assessments on the cloud. The provisioned environment creates a custom sandbox on the cloud, which is equipped with the necessary dependencies and packages.
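A minimal sketch of the offline go-ahead check described above, assuming an illustrative dependency list (this is not the actual environment manager), is:

import importlib.util

REQUIRED_PACKAGES = ["pandas", "sklearn", "evaluator"]   # illustrative dependency list

def environment_ready():
    # report any packages that are not importable in the trainee's local environment
    missing = [p for p in REQUIRED_PACKAGES if importlib.util.find_spec(p) is None]
    if missing:
        print("Install the missing packages before taking the assessment:", ", ".join(missing))
        return False
    return True   # go-ahead signal: files can be transferred and assessments evaluated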
[0062] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0063] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[0064] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0065] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0066] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0067] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of the disclosed embodiments being indicated by the following claims.
Claims:
We Claim:
1. A processor implemented method (200) for training and evaluation of Artificial Intelligence (AI) and Machine Learning (ML) capabilities, the method comprising:
providing (202), by one or more hardware processors, an authenticated access to an instructor, wherein the instructor defines AI or ML training and assessment approach for a plurality of trainees;
enabling the instructor (204), by the one or more hardware processors, to uniquely identify a type of a task to be assigned to the plurality of trainees, wherein the type of task comprises one of a dataset task, a programming task, and a combinational task thereof;
identifying (206), by the one or more hardware processors, in accordance with the type of the task, a dataset for each trainee, from among a plurality of datasets from a centrally managed repository, wherein a dataset distribution approach is managed centrally and enables one of:
each trainee receiving a unique dataset from among the plurality of datasets,
each group of trainees from among a plurality of groups generated from the plurality of trainees receiving a unique subset of dataset among the plurality of datasets, and
each of the plurality of trainees receiving same dataset;
defining (208), by the one or more hardware processors, a plurality of presets associated with the dataset using Automated Machine Learning (AutoML) techniques, the plurality of presets comprising:
a) splitting of the dataset into a training set shared with each trainee, a test set shared with each trainee, and a test solution set shared with the instructor comprising actual values for outputs in target columns, against which output predictions of each trainee are evaluated for a ML problem to be performed for the type of the task identified for each trainee;
b) defining independent and dependent features of the dataset to be used by the trainee for the ML problem;
c) determining evaluation metrics to be applied for output predictions of the ML problem executed by each trainee; and
d) defining a scoring pattern for the ML problem against the evaluation metrics identified;
generating (210), by the one or more hardware processors, an instructor-side evaluation notebook in accordance with a task design planned by the instructor for the task for each trainee, wherein the task design splits the task into a plurality of subtasks and specifies a plurality of values corresponding to the defined plurality of presets;
generating (212), by the one or more hardware processors, an evaluation trainee notebook for each trainee, defining a framework for executing the plurality of subtasks of the task specified by the task design;
distributing to each trainee (214), by the one or more hardware processors, the training set and the test set obtained from the dataset in accordance with the plurality of presets;
publishing (216), by the one or more hardware processors, the task in accordance with the type identified for each trainee by mapping the task or corresponding plurality of subtasks to one or more nodes in a concept inventory comprising a hierarchical node structure representing ML concepts, wherein inclination and learning of each trainee is analyzed based on the task or the plurality of subtasks completed by each trainee and corresponding mapping to the ML concepts; and
dynamically provisioning (218), by the one or more hardware processors, a plurality of parameters based on execution environment of each trainee, and wherein the execution environment comprises an offline mode and an online mode to be opted by each trainee.
2. The method as claimed in claim 1, wherein the offline mode scales a number of trainees that are trained and evaluated by enabling each trainee to install the evaluation trainee notebook on a local system for evaluation of the exercise received via an application installed on the local system, wherein the application comprises the functions and protocols to enable authenticated interaction with an application server for specific communication comprising testing authenticity of the evaluation trainee notebook for each trainee and the task, fetching questions and data for the task and saving it locally using auth key, sending the locally run cell-wise step-by-step programming output to the application server, fetching a score in accordance with the scoring pattern after evaluation is done on the application server, obtaining the scores on demand, and submitting and saving a final status of the evaluation trainee notebook with entire code and details to the application server.
3. The method as claimed in claim 1, wherein the online mode comprises providing each trainee, with a workspace via a dedicated instance through a thin client with predefined resource availability per user, wherein an online environment is provisioned for each trainee with all dependencies and resources dedicated to a single trainee, and wherein packages and the evaluation notebook for each trainee to work is prior present in the workspace.
4. The method as claimed in claim 1, wherein customization is enabled to be applied over the plurality of values corresponding to the plurality of presets during the task design.
5. A system (100) for training and evaluation of Artificial Intelligence (AI) and Machine Learning (ML) capabilities, the system (100) comprising:
a memory (102) storing instructions;
one or more Input/Output (I/O) interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
provide, an authenticated access to an instructor, wherein the instructor defines AI or ML training and assessment approach for a plurality of trainees;
enable the instructor to uniquely identify a type of a task to be assigned to the plurality of trainees, wherein the type of task comprises one of a dataset task, a programming task, and a combinational task thereof;
identify, in accordance with the type of the task, a dataset for each trainee, from among a plurality of datasets from a centrally managed repository, wherein a dataset distribution approach is managed centrally and enables one of:
each trainee receiving a unique dataset from among the plurality of datasets,
each group of trainees from among a plurality of groups generated from the plurality of trainees receiving a unique subset of dataset among the plurality of datasets, and
each of the plurality of trainees receiving same dataset;
define, a plurality of presets associated with the dataset using Automated Machine Learning (AutoML) techniques, the plurality of presets comprising:
a) splitting of the dataset into a training set shared with each trainee, a test set shared with each trainee, and a test solution set shared with the instructor comprising actual values for outputs in target columns, against which output predictions of each trainee are evaluated for a ML problem to be performed for the type of the task identified for each trainee;
b) defining independent and dependent features of the dataset to be used by the trainee for the ML problem;
c) determining evaluation metrics to be applied for output predictions of the ML problem executed by each trainee; and
d) defining a scoring pattern for the ML problem against the evaluation metrics identified;
generate, an instructor-side evaluation notebook in accordance with a task design planned by the instructor for the task for each trainee, wherein the task design splits the task into a plurality of subtasks and specifies a plurality of values corresponding to the defined plurality of presets;
generate, an evaluation trainee notebook for each trainee, defining a framework for executing the plurality of subtasks of the task specified by the task design;
distribute, to each trainee, the training set and the test set obtained from the dataset in accordance with the plurality of presets;
publish, the task in accordance with the type identified for each trainee by mapping the task or corresponding plurality of subtasks to one or more nodes in a concept inventory comprising a hierarchical node structure representing ML concepts, wherein inclination and learning of each trainee is analyzed based on the task or the plurality of subtasks completed by each trainee and corresponding mapping to the ML concepts; and
dynamically provision, a plurality of parameters based on execution environment of each trainee, and wherein the execution environment comprises an offline mode and an online mode to be opted by each trainee.
6. The system (100) as claimed in claim 5, wherein the offline mode scales a number of trainees that are trained and evaluated by enabling each trainee to install the evaluation trainee notebook on a local system for evaluation of the exercise received via an application installed on the local system, wherein the application comprises the functions and protocols to enable authenticated interaction with an application server for specific communication comprising testing authenticity of the evaluation trainee notebook for each trainee and the task, fetching questions and data for the task and saving it locally using auth key, sending the locally run cell-wise step-by-step programming output to the application server, fetching a score in accordance with the scoring pattern after evaluation is done on the application server, obtaining the scores on demand, and submitting and saving a final status of the evaluation trainee notebook with entire code and details to the application server.
7. The system (100) as claimed in claim 5, wherein the online mode comprises providing each trainee, with a workspace via a dedicated instance through a thin client with predefined resource availability per user, wherein an online environment is provisioned for each trainee with all dependencies and resources dedicated to a single trainee, and wherein packages and the evaluation notebook for each trainee to work is prior present in the workspace.
8. The system (100) as claimed in claim 5, wherein customization is enabled to be applied over the plurality of values corresponding to the plurality of presets during the task design.
Dated this 15th Day of September 2022
Tata Consultancy Services Limited
By their Agent & Attorney
(Adheesh Nargolkar)
of Khaitan & Co
Reg No IN-PA-1086
| # | Name | Date |
|---|---|---|
| 1 | 202221052794-STATEMENT OF UNDERTAKING (FORM 3) [15-09-2022(online)].pdf | 2022-09-15 |
| 2 | 202221052794-REQUEST FOR EXAMINATION (FORM-18) [15-09-2022(online)].pdf | 2022-09-15 |
| 3 | 202221052794-FORM 18 [15-09-2022(online)].pdf | 2022-09-15 |
| 4 | 202221052794-FORM 1 [15-09-2022(online)].pdf | 2022-09-15 |
| 5 | 202221052794-FIGURE OF ABSTRACT [15-09-2022(online)].pdf | 2022-09-15 |
| 6 | 202221052794-DRAWINGS [15-09-2022(online)].pdf | 2022-09-15 |
| 7 | 202221052794-DECLARATION OF INVENTORSHIP (FORM 5) [15-09-2022(online)].pdf | 2022-09-15 |
| 8 | 202221052794-COMPLETE SPECIFICATION [15-09-2022(online)].pdf | 2022-09-15 |
| 9 | 202221052794-Proof of Right [25-11-2022(online)].pdf | 2022-11-25 |
| 10 | Abstract1.jpg | 2022-11-29 |
| 11 | 202221052794-FORM-26 [29-11-2022(online)].pdf | 2022-11-29 |
| 12 | 202221052794-FER.pdf | 2025-06-06 |
| 13 | 202221052794-FER_SER_REPLY [18-11-2025(online)].pdf | 2025-11-18 |
| 14 | 202221052794-DRAWING [18-11-2025(online)].pdf | 2025-11-18 |
| 15 | 202221052794-CLAIMS [18-11-2025(online)].pdf | 2025-11-18 |
| 1 | Search_Strategy_MatrixE_08-01-2025.pdf | |