Abstract: A typical/conventional job scheduler involves job creation by a user, submission of the created job to a scheduler for running, and generation of results upon job execution, wherein execution is performed in a batch mode on different job scheduler tools. However, these approaches become sub-optimal when run on GPU resources for training of deep-learning models. Embodiments of the present disclosure provide a system and method that implement a source code controller that automatically creates (i) a job based on code obtained from a user, and (ii) a docker container based on configuration and environment details obtained from the user. The system further executes the created job from a corresponding queue in a graphics processing unit (GPU) and captures resource utilization metrics associated with utilization of the GPU, wherein an output associated with the job is generated at the end of job execution. The system further generates deep-learning models and an analytical report based on the output. [To be published with FIG. 3]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
SCHEDULING JOBS ON GRAPHICAL PROCESSING UNIT BASED SYSTEMS AND GENERATING MACHINE LEARNING MODELS
THEREOF
Applicant
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to job scheduling, and, more particularly, to scheduling jobs on graphical processing unit-based systems and generating machine learning models thereof.
BACKGROUND
[002] Job scheduling is an age-old concept that remains relevant today, as it not only helps end users maximize resource utilization but also reduces operating costs. Over the last half a decade, Machine Learning (ML), Artificial Intelligence (AI), Deep Learning (DL), and the like have been trending. All these approaches have one thing in common: they demand significant compute resources, where training can run for hours and sometimes even for days. With limited hardware resources, meeting the demand of researchers/ML developers is very difficult, and that is where job scheduling comes to the rescue.
[003] The job scheduling concept has existed for many decades, and over this period there have been many enhancements in terms of what the best scheduling algorithm could be, such as FIFO, priority scheduling, shortest job first, and the like, but the core concept of the scheduler has remained the same. A typical job scheduler involves job creation by a user, submission of the created job to a scheduler, running the created job, and generating results upon job execution. The above approach for executing jobs is performed in a batch mode on different job scheduler tools. However, these approaches become sub-optimal when run on GPU resources for ML, AI or DL related training.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for scheduling jobs on graphical processing unit-based systems and generating machine learning models thereof. The method comprises obtaining, via a source code controller executed by
one or more hardware processors, a programming language (PL) code from a user; creating, via the source code controller executed by the one or more hardware processors, a job using the obtained PL code; queueing, via the one or more hardware processors, the created job for execution; dynamically creating, via the one or more hardware processors, based on configuration and environment details obtained from the user, a docker container; executing, via the one or more hardware processors, by using the dynamically created docker container and training data comprised in the memory, the created job based on instructions comprised in the obtained PL code, wherein the created job is executed in a graphics processing unit (GPU) of a server system; capturing, via the one or more hardware processors, during execution of the created job, one or more resource utilization metrics associated with the graphics processing unit (GPU) utilization and obtaining an output associated with the created job at the end of execution of the created job; and generating, via the one or more hardware processors, a machine learning (ML) model and an analytical report, based on the output associated with the created job being executed, wherein the analytical report comprises information pertaining to the one or more resource utilization metrics associated with graphics processing unit (GPU) utilization.
[005] In an embodiment, during the execution of the created job, one or more running logs associated with the created job are monitored, wherein the one or more running logs comprise (i) amount of training completion by the machine learning model, and (ii) at least a partial output associated with training of the machine learning model.
[006] In an embodiment, the configuration and environment details comprise information on at least one of a type of queue job, an execution information pertaining to the dynamically created docker container, and a set of scripts for executing the PL code.
[007] In an embodiment, the one or more resource utilization metrics comprise at least one of (i) an amount of GPU memory consumption, (ii) amount of GPU processor consumption, (iii) an amount of storage drive being utilized, and
(iv) central processing unit (CPU) and random access memory (RAM) being utilized.
[008] In another aspect, there is provided a system for scheduling jobs on graphical processing unit-based systems and generating machine learning models thereof. The system comprises: a memory storing instructions and a source code controller; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain, via the source code controller comprised in the memory, a programming language (PL) code from a user; create, via the source code controller, a job using the obtained PL code; queue the created job for execution; dynamically create a docker container based on configuration and environment details obtained from the user; execute, by using the dynamically created docker container and training data comprised in the memory, the created job based on instructions comprised in the obtained PL code, wherein the created job is executed in a graphics processing unit (GPU) of a server system; capture, during execution of the created job, one or more resource utilization metrics associated with the graphics processing unit (GPU) utilization and obtain an output associated with the created job at the end of the execution of the created job; and generate a machine learning (ML) model and an analytical report based on the output associated with the created job, wherein the analytical report comprises information pertaining to the one or more resource utilization metrics associated with graphics processing unit (GPU) utilization.
[009] In an embodiment, during the execution of the created job, one or more running logs associated with the created job are monitored, wherein the one or more running logs comprise (i) amount of training completion by the machine learning model, and (ii) at least a partial output associated with training of the machine learning model.
[010] In an embodiment, the configuration and environment details comprise information on at least one of a type of queue job, an execution
information pertaining to the dynamically created docker container, and a set of scripts for executing the PL code.
[011] In an embodiment, the one or more resource utilization metrics comprise at least one of (i) an amount of GPU memory consumption, (ii) amount of GPU processor consumption, (iii) an amount of storage drive being utilized, and (iv) central processing unit (CPU) and random access memory (RAM) being utilized.
[012] In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to schedule jobs on graphical processing unit-based systems and generate machine learning models thereof, by obtaining, via a source code controller, a programming language (PL) code from a user; creating, via the source code controller, a job using the obtained PL code; queueing the created job for execution; dynamically creating a docker container based on configuration and environment details obtained from the user; executing, by using the dynamically created docker container and training data comprised in a memory, the created job based on instructions comprised in the obtained PL code, wherein the created job is executed in a graphics processing unit (GPU) of a server system; capturing, during execution of the created job, one or more resource utilization metrics associated with the graphics processing unit (GPU) utilization and obtaining an output associated with the created job at the end of the execution of the created job; and generating one or more models (e.g., machine learning (ML) model(s), Artificial Intelligence (AI), Deep Learning (DL), and the like) and an analytical report, based on the output associated with the created job, wherein the analytical report comprises information pertaining to the one or more resource utilization metrics associated with graphics processing unit (GPU) utilization.
[013] In an embodiment, during the execution of the created job, one or more running logs associated with the created job are monitored, wherein the one or more running logs comprise (i) amount of training completion by the machine
learning model, and (ii) at least a partial output associated with training of the machine learning model.
[014] In an embodiment, the configuration and environment details comprise information on at least one of a type of queue job, an execution information pertaining to the dynamically created docker container, and a set of scripts for executing the PL code.
[015] In an embodiment, the one or more resource utilization metrics comprise at least one of (i) an amount of GPU memory consumption, (ii) amount of GPU processor consumption, (iii) an amount of storage drive being utilized, and (iv) central processing unit (CPU) and random access memory (RAM) being utilized.
[016] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[017] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[018] FIG. 1 depicts a system for scheduling and executing jobs and generating machine learning models thereof, in accordance with an embodiment of the present disclosure.
[019] FIG. 2 depicts an architecture of the system of FIG. 1 for scheduling and executing jobs and generating machine learning models thereof, in accordance with an embodiment of the present disclosure.
[020] FIG. 3 depicts an exemplary flow chart illustrating a method for scheduling and executing jobs and generating machine learning models thereof, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
[021] FIG. 4 depicts a graphical representation illustrating amount of Graphics Processing Unit (GPU) memory consumption and amount of GPU
processor consumption, in accordance with an embodiment of the present disclosure.
[022] FIG. 5 depicts a graphical representation illustrating central processing unit (CPU) being utilized, in accordance with an example embodiment of the present disclosure.
[023] FIG. 6 depicts a graphical representation illustrating random access memory (RAM) being utilized, in accordance with an example embodiment of the present disclosure.
[024] FIG. 7 depicts a graphical representation illustrating partial output associated with training of the machine learning model, in accordance with an example embodiment of the present disclosure.
[025] FIG. 8 depicts an exemplary graphical representation of an analytical report generated by the system of FIGS. 1-2, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[026] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[027] As mentioned above, job scheduling is an age-old concept that remains relevant today, as it not only helps end users maximize resource utilization but also reduces operating costs. Approaches such as Machine Learning (ML), Artificial Intelligence (AI), Deep Learning (DL), and the like have been trending and are currently being utilized for training. A typical job scheduler involves job creation by a user, submission of the created job to a scheduler, running the created job, and generating results upon job execution. The above approach for executing jobs is performed in a batch mode on different job scheduler tools. However, these approaches become sub-optimal when run on GPU resources for ML, AI or DL related training. This is mainly due to factors such as: 1) the Try-Fail-Optimize-Repeat approach, 2) change of frameworks, 3) lack of knowledge of job scheduling tools, 4) manual intervention(s) required for resource allocation, 5) traceability, 6) auditing and monitoring, 7) hardware independence, 8) common datasets, and the like, to name a few.
[028] In the Try-Fail-Optimize-Repeat approach, most DL/ML training works based on hyper parameters provided by an end user, where model(s) are trained on the same dataset by frequently changing parameters in the algorithm to improve end results. This requires frequent changes in the code and creation of a fresh batch job every time, which is a time-consuming process and further is inefficient because it is uncertain whether a new parameter performs as per expectations or is likely to fail.
[029] In the change of frameworks approach, an end user such as a programmer switches between multiple frameworks, or a hybrid framework setup is utilized for getting optimized results. However, this requires setting up everything from scratch, incurring more time and higher operating and infrastructure costs.
[030] Further, most of the time programmers are from a core programming or mathematics background and may not possess knowledge of specialized scheduler tools. Therefore, learning these tools becomes another task. Furthermore, many of the specialized job scheduling tools require manual intervention to define resources per job, and allocation has to be done before the job gets into the queue. In other words, manual interventions are required for resource allocation.
[031] In a further aspect, once job batches are completed, results are obtained from the system for analysis. In real world scenarios such as DL/ML problems, programmers often need to compare their results against different hyper parameters. At that point in time, getting an old batch's hyper parameter details or changes is a daunting task, as one has to go through the entire code, and this may at times require re-running the entire batch just to confirm the result of an older run.
[032] In yet a further aspect, job scheduling tools typically provide auditing or monitoring. In large-scale enterprises, auditing is needed to check information on GPU resource utilization for training and the ROI (return on investment), as GPU resources are expensive in terms of cost. On the other hand, monitoring tools need additional capabilities beyond just monitoring the resources, such as play back of older batch runs without much hassle, so that the programmer can understand the behavior of the algorithm and improve on it. For such cases, the programmer is dependent on the infrastructure monitoring team to obtain utilization data.
[033] Another factor is hardware independence, wherein, while working on a DL problem, there are times when developers tend to reduce the training data size to quickly verify the behavior of a trained model. These include minor tweaks in program code to test certain aspects of training. Waiting for job execution in such scenarios can be painful and time consuming, as changing the batch configuration cannot be foreseen in advance or realized up front.
[034] Moreover, in typical ML/DL approaches, DL jobs require huge volumes of training data such as, but not limited to, audio, images, video, 3D objects, text, etc. However, a problem arises when the job scheduling tools do not have the capabilities to identify redundant data and filter it out. Such redundancy in data contributes towards consumption of additional memory storage space.
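For illustration of how such redundant data could be identified, below is a minimal sketch assuming datasets stored as files on a shared drive and content-based hashing; the helper and its names are assumptions for this sketch and not part of the claimed method:

# Illustrative sketch (an assumption, not the claimed method): detect
# redundant dataset files by content hash so duplicates can be filtered out.
import hashlib
from pathlib import Path

def find_duplicates(dataset_dir):
    seen = {}        # content hash -> first file seen with that content
    duplicates = []  # redundant files that consume additional storage
    for path in Path(dataset_dir).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in seen:
                duplicates.append((path, seen[digest]))
            else:
                seen[digest] = path
    return duplicates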
[035] Considering the above technical problems, embodiments of the present disclosure provide systems and methods that implement a job scheduler which can support DL/ML based workloads on GPU enabled systems. More specifically, the present disclosure implements various components of the system for creation of an effective job scheduler that is developer centric and addresses the above technical problems faced in real-world scenarios by various stakeholders. In the present disclosure, program code is checked in and a dataset is obtained, wherein a job is created and queued for execution. Inputs pertaining to configuration and environment are also obtained, using which the system dynamically creates a docker container. The job is then executed in a GPU using the created docker
container and the training dataset, wherein during or upon completion of the job execution, resource utilization metrics associated with the GPU and output of job execution are captured. The system then generates one or more machine learning models and an analytical report based on the output from the job execution.
[036] Referring now to the drawings, and more particularly to FIGS. 1 through 8, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[037] FIG. 1 depicts a system 100 for scheduling and executing jobs and generating machine learning models thereof, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred to as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[038] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an
embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[039] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises datasets such as audio, images, video, three-dimensional (3D) objects, text, etc.
[040] The information stored in the database 108 may further comprise program code obtained from the user, the job created using the program code, configuration and environment details, docker containers for each application scenario, job scheduling details, job execution details, resource utilization metrics of the GPU executing the jobs, machine learning models generated using the output of job execution, analytical reports, and the like. The information stored in the database 108 (or memory 102) may further comprise running logs associated with the job being executed, wherein the one or more running logs comprise (i) the amount of training completion by a machine learning model, and (ii) at least a partial output associated with training of the machine learning model.
[041] The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[042] FIG. 2, with reference to FIG. 1, depicts an architecture of the system 100 of FIG. 1 for scheduling and executing jobs and generating machine learning models thereof, in accordance with an embodiment of the present disclosure.
[043] FIG. 3, with reference to FIGS. 1-2, depicts an exemplary flow chart illustrating a method for scheduling and executing jobs and generating machine
learning models thereof, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, the block diagram/architecture of the system 100 of FIG. 2, the flow diagram as depicted in FIG. 3 and diagrams of FIGS. 4 through 8. In an embodiment, at step 202 of the present disclosure, the one or more hardware processors 104 obtain, via a source code controller of FIG. 2, a programming language (PL) code from a user. Below is an illustrative PL code fed as the input to the system:
#demo.py
import os
import torch as t
from utils.Config import opt
from models.faster_rcnn_vgg16 import FasterRCNNVGG16
from trainer import FasterRCNNTrainer
from data.data_utils import read_image
from utils.vis_tool import vis_bbox
from utils import array_tool as at
from data.dataset import RSNADataset, inverse_normalize
from torch.utils import data as data_
from tqdm import tqdm

# Monkey-patch because I trained with a newer version.
# This can be removed once PyTorch 0.4.x is out.
# See https://discuss.pytorch.org/t/question-about-rebuild-tensor-v2/14560
import torch._utils
try:
    torch._utils._rebuild_tensor_v2
except AttributeError:
    def _rebuild_tensor_v2(storage, storage_offset, size, stride,
                           requires_grad, backward_hooks):
        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
        tensor.requires_grad = requires_grad
        tensor._backward_hooks = backward_hooks
        return tensor
    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2

#%%
faster_rcnn = FasterRCNNVGG16()
trainer = FasterRCNNTrainer(faster_rcnn).cuda()
trainer.load('./checkpoints/fasterrcnn_09031352_0')
opt.caffe_pretrain = True  # this model was trained from a caffe-pretrained model

# Plot examples on training set
dataset = RSNADataset(opt.root_dir)
for i in range(0, len(dataset)):
    sample = dataset[i]
    img = sample['image']
    ori_img_ = inverse_normalize(at.tonumpy(img))
    # plot predicted bboxes
    _bboxes, _labels, _scores = trainer.faster_rcnn.predict([ori_img_], visualize=True)
    pred_img = vis_bbox(ori_img_,
                        at.tonumpy(_bboxes[0]),
                        at.tonumpy(_labels[0]).reshape(-1),
                        at.tonumpy(_scores[0]))
[044] In an embodiment, at step 204 of the present disclosure, the one or more hardware processors 104 create, via the source code controller, a job using the obtained PL code. The source code controller (e.g., source control system as depicted in FIG. 2) automatically creates the job using the obtained PL code. Below is an illustrative job created using the obtained PL code:
#Sample Job Starts here
image: nvcr.io/nvidia/tensorflow:19.03-py3
train:
  script:
    - pip install -r requirements.txt
    - python demo.py --max_steps 12000
  tags:
    - One_16GB_GPU
  only:
    - master
[045] The source code controller is configured to keep track of all the changes made in the PL code for addressing the issue of traceability; at any point in time, the programmer can switch back to old code to fetch their parameters or algorithm.
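For illustration, assuming the source code controller is backed by a git repository (an assumption consistent with the branch reference in the sample job, not a requirement of the disclosure), switching back to old code may look as follows:

# Illustrative sketch, assuming a git-backed source code controller:
# list prior revisions of the PL code and restore an older one so the
# programmer can recover earlier hyper parameters or algorithm variants.
import subprocess

def list_revisions(repo_dir, path="demo.py"):
    out = subprocess.run(["git", "-C", repo_dir, "log", "--oneline", "--", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()  # e.g. ['a1b2c3d tune max_steps', ...]

def restore_revision(repo_dir, commit, path="demo.py"):
    # Restore the file exactly as it existed at the given commit.
    subprocess.run(["git", "-C", repo_dir, "checkout", commit, "--", path], check=True)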
[046] In an embodiment of the present disclosure, the created job may be a file written in 'yaml' format as described above and obtains the following inputs (a sketch of assembling such a file from these inputs follows the list):
1. Framework that is planned to be used for training
2. Additional packages needed for the job
3. Nature of resources (GPU capacity)
4. Commands to start the job.
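Below is a minimal sketch of assembling such a 'yaml' job file from the four inputs above; the helper build_job_yaml and its parameter names are assumptions for illustration, not the exact controller logic:

# Illustrative sketch (assumed helper): build the 'yaml' job file from
# the four inputs listed above. Field names mirror the sample job.
def build_job_yaml(framework_image, packages_file, queue_tag, commands, branch="master"):
    lines = ["image: {}".format(framework_image),              # 1. framework for training
             "train:",
             "  script:",
             "    - pip install -r {}".format(packages_file)]  # 2. additional packages
    lines += ["    - {}".format(cmd) for cmd in commands]      # 4. commands to start the job
    lines += ["  tags:",
              "    - {}".format(queue_tag),                    # 3. nature of resources (GPU capacity)
              "  only:",
              "    - {}".format(branch)]
    return "\n".join(lines)

# Example reproducing the sample job shown earlier:
print(build_job_yaml("nvcr.io/nvidia/tensorflow:19.03-py3", "requirements.txt",
                     "One_16GB_GPU", ["python demo.py --max_steps 12000"]))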
[047] At step 206 of the present disclosure, the one or more hardware processors 104 queue the created job for execution. At step 208 of the present disclosure, the one or more hardware processors 104 dynamically create a docker container based on configuration and environment details obtained from the user. The docker container is automatically and dynamically created by the system 100 based on the configuration and environment details obtained from the user. The
system 100 utilizes (or may utilize) or queries a docker registry (e.g., the docker registry as depicted in FIG. 2) for dynamic and automatic creation of the docker container. The docker container enables creation of an isolated environment where the job runs with the required resources. In one embodiment, the docker container is a representative file containing binary components (e.g., binary values), which gets created at run time. The configuration and environment details may include, but are not limited to, information on at least one of a type of queue job, an execution information pertaining to the dynamically created docker container, a set of scripts for executing the PL code, and the like. For instance, the below pseudo code provides at least a few configuration and environment details, and these details shall not be construed as limiting the scope of the present disclosure:
image: nvcr.io/nvidia/tensorflow:19.03-py3  # Container to be created dynamically with code base and data
train:
  script:  # Scripts to execute PL code
    - pip install -r requirements.txt
    - python demo.py --max_steps 12000
  tags:
    - One_16GB_GPU  # Queue name which tells the capacity of the resource
  only:
    - master  # PL code branch from version control system
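A minimal sketch of one possible way to create such a container programmatically is shown below, assuming the Docker SDK for Python (docker-py) and an NVIDIA-enabled container runtime on the host; this is one realization for illustration, not the specific implementation of the system 100:

# Illustrative sketch using the Docker SDK for Python (an assumed
# dependency): dynamically create an isolated container with the
# framework image, the user's data, and a GPU resource limit.
import docker

def create_job_container(image, script, data_dir, gpu_count=1):
    client = docker.from_env()
    return client.containers.run(
        image,                                        # framework image from the job file
        command=["bash", "-c", " && ".join(script)],  # scripts to execute the PL code
        volumes={data_dir: {"bind": "/workspace/data", "mode": "ro"}},  # user's dataset
        device_requests=[docker.types.DeviceRequest(count=gpu_count,
                                                    capabilities=[["gpu"]])],
        detach=True)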
[048] At step 210 of the present disclosure, the one or more hardware processors 104 execute the created job based on instructions comprised in the obtained PL code. The created job is executed by the system 100 using (i) the dynamically created docker container and (ii) training data comprised in the memory. In an embodiment, the created job is executed in a graphics processing unit (GPU) associated with a system (e.g., the system 100 or a server system). The created job is automatically set in queue for execution or executed, in one embodiment of the present disclosure.
[049] A custom wrapper is provided by the system 100 which is configured to: (i) create queues by logically grouping the resources, (ii) make the user's data available on the docker container, (iii) initialize the docker environment with the requested framework, (iv) apply resource limits, (v) run the scripts/commands provided in the yaml file, and the like, as sketched below.
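Below is an illustrative sketch of such a wrapper, reusing the create_job_container helper sketched above; the queue names and job fields are assumptions for illustration, not the exact wrapper of the system 100:

# Illustrative wrapper sketch covering steps (i)-(v); queue names and
# job fields are assumptions for this sketch.
import queue

gpu_queues = {"One_16GB_GPU": queue.Queue()}  # (i) queues as logical resource groups

def run_job(job, create_job_container):
    container = create_job_container(            # (ii) data mount, (iii) framework image
        image=job["image"], script=job["script"],
        data_dir=job["data_dir"], gpu_count=job.get("gpus", 1))  # (iv) resource limit
    for line in container.logs(stream=True):     # (v) scripts/commands run in the container
        print(line.decode().rstrip())            # running logs, monitored at step 212
    return container.wait()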
[050] At step 212 of the present disclosure, the one or more hardware processors 104 capture, during the execution of the created job, one or more resource utilization metrics associated with the graphics processing unit (GPU) utilization, and an output associated with the created job is obtained after completing the job execution. Examples of the one or more resource utilization metrics comprise, but are not limited to, at least one of (i) an amount of GPU memory consumption, (ii) an amount of GPU processor consumption, (iii) an amount of storage drive being utilized, and (iv) central processing unit (CPU) and random access memory (RAM) being utilized. The system 100 may employ and implement a resource management tool to capture all the metrics for GPU, CPU, and memory utilization for individual hardware components, in one embodiment of the present disclosure. FIG. 4, with reference to FIGS. 1 through 3, depicts a graphical representation illustrating the amount of GPU memory consumption and the amount of GPU processor consumption, in accordance with an embodiment of the present disclosure. The below table (Table 1) depicts illustrative examples of storage utilization, and these examples shall not be construed as limiting the scope of the present disclosure; a sketch of sampling such metrics follows Table 1:
Table 1
Project Name(s) Program(s) Storage Used
Group 1 Program-1 17.56GB
Group 1 Program-3 25.08GB
Group 3 Program-1 166.35MB
Group 1 Program-2 13.82GB
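One way such metrics may be sampled is sketched below, assuming the nvidia-smi utility and the psutil package are available on the GPU server; this polling sketch is illustrative and is not the specific resource management tool of the disclosure:

# Illustrative metrics sampler (assumes nvidia-smi and psutil exist on
# the server); returns the four kinds of resource utilization metrics.
import subprocess
import psutil

def sample_metrics():
    gpu = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    gpu_util, gpu_mem = [v.strip() for v in gpu.splitlines()[0].split(",")]
    return {"gpu_processor_percent": float(gpu_util),          # (ii) GPU processor consumption
            "gpu_memory_mib": float(gpu_mem),                  # (i) GPU memory consumption
            "storage_percent": psutil.disk_usage("/").percent, # (iii) storage drive utilized
            "cpu_percent": psutil.cpu_percent(interval=1),     # (iv) CPU utilized
            "ram_percent": psutil.virtual_memory().percent}    # (iv) RAM utilized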
[051] FIG. 5, with reference to FIGS. 1 through 4, depicts a graphical representation illustrating central processing unit (CPU) being utilized, in accordance with an example embodiment of the present disclosure. More specifically, FIG. 5 depicts processor load average – last, minimum, average, maximum (e.g., 1-minute average per core, 5-minute average per core, 15-minute average per core) and trigger (e.g., processor load is too high on RNI-NVIDIA®-SVR [>5]). FIG. 6, with reference to FIGS. 1 through 5, depicts a graphical representation illustrating random access memory (RAM) being utilized, in accordance with an example embodiment of the present disclosure. More specifically, FIG. 6 depicts available memory (average – last, minimum, average, maximum) and trigger (e.g., lack of available memory on server on RNI-NVIDIA®-SVR [<20M]). During execution of the created job, there could be intermediary training models (or intermediary output(s)) generated that help the system 100 to determine accuracy and level of training. In other words, one or more running logs associated with the created job being executed are generated, wherein the one or more running logs are monitored. These running logs provide information on (i) the amount of training completion by the machine learning model (or the intermediary training models being generated or outputted by the system 100), and (ii) at least a partial output associated with training of the machine learning model. Below is an exemplary running log monitored and generated by the system 100 during execution of the created job, and such a running log example shall not be construed as limiting the scope of the present disclosure. More specifically, the amount of training completion of the intermediary model(s) being outputted for generating a final ML model is shown below, followed by a sketch of deriving the completion percentage from such a log:
Amount of training completed:
Train on 15529 samples, validate on 1564 samples
Epoch 1/100
200/15529 [ ] - ETA: 1:38 - loss: 15.7365 - accuracy: 0.5050
400/15529 [ ] - ETA: 1:02 - loss: 17.0129 - accuracy: 0.5350
600/15529 [> ] - ETA: 49s - loss: 16.1993 - accuracy: 0.5400
800/15529 [> ] - ETA: 43s - loss: 15.6606 - accuracy: 0.5263
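For illustration, the amount of training completion can be derived from such progress lines; the regular expression below assumes the 'current/total' counter format shown above and is not mandated by the disclosure:

# Illustrative parser for the running log above: derive the percentage
# of training completed from the 'current/total' sample counter.
import re

def training_progress(log_line):
    m = re.search(r"(\d+)/(\d+)", log_line)
    if m is None:
        return None
    done, total = int(m.group(1)), int(m.group(2))
    return 100.0 * done / total

print(training_progress("800/15529 [> ] - ETA: 43s - loss: 15.6606 - accuracy: 0.5263"))
# -> 5.15... (percent of the epoch's samples processed)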
[052] FIG. 7, with reference to FIGS. 1 through 6, depicts a graphical representation illustrating partial output associated with training of the machine learning model, in accordance with an example embodiment of the present disclosure.
[053] At step 214 of the present disclosure, the one or more hardware processors 104 generate a machine learning (ML) model and an analytical report, based on the output associated with the created job, the output being derived upon full completion of the job execution. In an embodiment, the analytical report comprises information pertaining to the one or more resource utilization metrics associated with the graphics processing unit (GPU) utilization. FIG. 8 depicts an exemplary graphical representation of the analytical report generated by the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. The analytical report can be produced in real time or post training, based on the data generated as part of training. This report provides information on training performance over various iterations (loss at various epochs).
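A minimal sketch of producing such a loss-per-epoch report is shown below; matplotlib is an assumed plotting library and the loss values are placeholders, not data from the disclosure:

# Illustrative report sketch: plot training loss over epochs using
# matplotlib (an assumed library); the loss values are placeholders.
import matplotlib
matplotlib.use("Agg")  # render without a display, e.g. on the server
import matplotlib.pyplot as plt

def write_loss_report(epoch_losses, out_path="analytical_report.png"):
    epochs = range(1, len(epoch_losses) + 1)
    plt.plot(epochs, epoch_losses, marker="o")
    plt.xlabel("Epoch")
    plt.ylabel("Training loss")
    plt.title("Training performance over iterations")
    plt.savefig(out_path)

write_loss_report([15.7, 12.3, 9.8, 7.4])  # placeholder loss values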
[054] As mentioned above, with the ever increasing demand for high-end GPU compute, job scheduling is one of the ways forward, as these resources are too expensive to leave idle. At the same time, the end users of this compute are not hardcore programmers, so the approach to schedule a job should be as simple as possible. Embodiments of the present disclosure address the above technical problems of creating a batch to schedule a job, traceability of the work done in the past at any point in time, and frequent changes of parameters to analyze their results, by reducing the overhead of creating batches again and again.
[055] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[056] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware
means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[057] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[058] The illustrated steps are set out to explain the exemplary
embodiments shown, and it should be anticipated that ongoing technological
development will change the manner in which particular functions are performed.
These examples are presented herein for purposes of illustration, and not limitation.
Further, the boundaries of the functional building blocks have been arbitrarily
defined herein for the convenience of the description. Alternative boundaries can
be defined so long as the specified functions and relationships thereof are
appropriately performed. Alternatives (including equivalents, extensions,
variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[059] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A
computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[060] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor implemented method, comprising:
obtaining, via a source code controller executed by one or more hardware processors, a programming language (PL) code from a user (202);
creating, via the source code controller executed by the one or more hardware processors, a job using the obtained PL code (204);
queueing, via the one or more hardware processors, the created job for execution (206);
dynamically creating, via the one or more hardware processors, a docker container based on configuration and environment details obtained from the user (208);
executing, via the one or more hardware processors, by using the dynamically created docker container and training data comprised in the memory, the created job based on instructions comprised in the obtained PL code, wherein the created job is executed in a graphics processing unit (GPU) of a server system (210);
capturing, via the one or more hardware processors, during execution of the created job, one or more resource utilization metrics associated with utilization of the graphics processing unit (GPU) and obtaining an output associated with the created job at the end of execution of the created job (212); and
generating, via the one or more hardware processors, a machine learning (ML) model and an analytical report, based on the output associated with the created job being executed, wherein the analytical report comprises information pertaining to the one or more resource utilization metrics associated with the utilization of the graphics processing unit (GPU) (214).
2. The processor implemented method of claim 1, wherein during the
execution of the created job, one or more running logs associated with the created
job are monitored, wherein the one or more running logs comprise (i) amount of
training completion by the machine learning model, and (ii) at least a partial output
associated with training of the machine learning model.
3. The processor implemented method of claim 1, wherein the configuration and environment details comprise information on at least one of a type of queue job, an execution information pertaining to the dynamically created docker container, and a set of scripts for executing the PL code.
4. The processor implemented method of claim 1, wherein the one or more resource utilization metrics comprise at least one of (i) an amount of GPU memory consumption, (ii) amount of GPU processor consumption, (iii) an amount of storage drive being utilized, and (iv) central processing unit (CPU) and random access memory (RAM) being utilized.
5. A system (100), comprising:
a memory (102) storing instructions and a source code controller; one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain, via the source code controller comprised in the memory (102), a programming language (PL) code from a user;
create, via the source code controller, a job using the obtained PL code;
queue the created job for execution;
dynamically create a docker container based on configuration and environment details obtained from the user;
execute, by using the dynamically created docker container and training data comprised in the memory, the created job based on instructions comprised in the obtained PL code, wherein the created job is executed in a graphics processing unit (GPU) of a server system;
capture, during execution of the created job, one or more resource utilization metrics associated with the graphics processing unit (GPU)
utilization and obtain an output associated with the created job at the end of execution of the created job; and
generate a machine learning (ML) model and an analytical report based on the output associated with the created job being executed, wherein the analytical report comprises information pertaining to the one or more resource utilization metrics associated with graphics processing unit (GPU) utilization.
6. The system as claimed in claim 5, wherein during the execution of the created job, one or more running logs associated with the created job are monitored, wherein the one or more running logs comprise (i) amount of training completion by the machine learning model, and (ii) at least a partial output associated with training of the machine learning model.
7. The system as claimed in claim 5, wherein the configuration and environment details comprise information on at least one of a type of queue job, an execution information pertaining to the dynamically created docker container, and a set of scripts for executing the PL code.
8. The system as claimed in claim 5, wherein the one or more resource utilization metrics comprise at least one of (i) an amount of GPU memory consumption, (ii) amount of GPU processor consumption, (iii) an amount of storage drive being utilized, and (iv) central processing unit (CPU) and random access memory (RAM) being utilized.