Abstract: ABSTRACT SYSTEM AND METHOD FOR EXECUTING AT LEAST ONE TASK ON A DISTRIBUTED COMPUTING CLUSTER The present disclosure relates to a system (104) and a method (600) for executing at least one task on a distributed computing cluster. The system (104) includes a transceiver unit (312). The transceiver unit (312) is configured to receive one or more tasks created by a user. The system (104) includes a deployment unit (314). The deployment unit (314) is configured to deploy the one or more tasks on the distributed computing cluster. The system (104) includes a resource allocation unit (316). The resource allocation unit (316) is configured to dynamically allocate a plurality of resources for the one or more tasks deployed on the distributed computing cluster. The system (104) further includes an execution unit (318). The execution unit (318) is configured to execute the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources. Ref. Fig. 3
DESC: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR EXECUTING AT LEAST ONE TASK ON A DISTRIBUTED COMPUTING CLUSTER
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention generally relates to data processing, and more particularly relates to a system and method for executing tasks on a distributed computing cluster.
BACKGROUND OF THE INVENTION
[0002] In computing systems, time and resources must be managed efficiently while executing a particular task. Traditionally, users had to manually write specific code and ensure its deployment onto the production system whenever a new requirement arose. This process required continuous effort to accommodate changing requirements and resulted in delays and inefficiencies. In conventional systems, when a user requests execution of a task, related code is written and deployed onto production. This process is followed for every task, which is time-consuming and also requires additional technical background about the code. For example, when a user requests information about the call success rate, program code is written and deployed onto a production server to search for the requested information. Further, if the user requests another task, yet another piece of code is written and deployed onto the production server. Thus, some programming is carried out for every task. This increases the processing time of the computing system, and writing code and deploying it on a production environment for every new job or task that needs to be executed on a distributed computing cluster is an effort-intensive process.
[0003] Therefore, there is a need for an efficient computing system that assists in providing a dynamic task creation and deployment platform.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a system and method for executing at least one task on a distributed computing cluster.
[0005] In one aspect of the present invention, a system for executing at least one task on a distributed computing cluster is disclosed. The system includes a transceiver unit configured to receive one or more tasks created by a user. The system includes a deployment unit configured to deploy the one or more tasks on the distributed computing cluster. The system includes a resource allocation unit configured to dynamically allocate a plurality of resources for the one or more tasks deployed on the distributed computing cluster. The system includes an execution unit configured to execute the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources.
[0006] In one embodiment, the distributed computing cluster includes a plurality of applications.
[0007] In one embodiment, an interface unit of the system allows the user to manage and monitor the one or more tasks.
[0008] In one embodiment, the interface unit is further configured to allow the user to create the one or more tasks by providing the user with one or more parameters.
[0009] In one embodiment, the one or more parameters include defining one or more task requirements and applying filters for selections of one or more logics.
[0010] In one embodiment, the execution unit is configured to execute the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources by dividing the one or more tasks into a plurality of sub-tasks and executing the plurality of sub-tasks in parallel utilizing a plurality of servers, wherein the results pertaining to the one or more tasks and the associated plurality of sub-tasks are aggregated and stored in a database. In one embodiment, the results are visually presented to the user on a User Equipment (UE) for further analysis.
[0011] In one embodiment, the plurality of resources includes at least one of one or more files, a memory, a Central Processing Unit (CPU) core, and a bandwidth.
[0012] In another aspect of the present invention, a method for executing at least one task on a distributed computing cluster is disclosed. The method includes the step of receiving, by one or more processors, one or more tasks created by a user. The method includes the step of deploying, by the one or more processors, the one or more tasks on the distributed computing cluster. The method includes the step of dynamically allocating, by the one or more processors, a plurality of resources for the one or more tasks deployed on the distributed computing cluster. The method includes the step of executing, by the one or more processors, the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources.
[0013] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive one or more tasks created by a user. The processor is further configured to deploy the one or more tasks on the distributed computing cluster. The processor is further configured to dynamically allocate a plurality of resources for the one or more tasks deployed on the distributed computing cluster. The processor is further configured to execute the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources.
[0014] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors coupled with a memory. The processor is configured to create one or more tasks based on input received from a user. The processor is configured to display results pertaining to the one or more tasks and the associated plurality of sub-tasks to the user.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of an environment for executing at least one task on a distributed computing cluster, according to various embodiments of the present invention;
[0018] FIG. 2 is an exemplary architecture of the system for executing at least one task on the distributed computing cluster, according to various embodiments of the present invention;
[0019] FIG. 3 is a block diagram of the system for executing at least one task on the distributed computing cluster, according to various embodiments of the present invention;
[0020] FIG. 4 is a schematic representation of a workflow of the system of FIG. 3, according to various embodiments of the present invention;
[0021] FIG. 5 is a signal flow diagram for executing at least one task on the distributed computing cluster, according to various embodiments of the present invention; and
[0022] FIG. 6 shows a flow diagram of a method for executing at least one task on the distributed computing cluster, according to various embodiments of the present invention.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] As per various embodiments depicted, the present invention discloses a system and method to enable dynamic job creation and deployment on the distributed computing cluster without the need for coding. The present invention allows users to define job requirements without writing code, while the deployment and execution are automated. This method simplifies the process, enhances efficiency, and revolutionizes the execution of tasks on distributed clusters. It further eliminates technical barriers, streamlines job execution, and makes it more accessible and adaptable to changing requirements. This method is performed at the application server, i.e., at the micro-service level.
[0028] In an embodiment of the present disclosure, the system provides a user interface (UI) through which the user provides information related to a job to be completed and applies logic accordingly. A job is created by the user through the UI for submitting a request to the application, which in turn processes that request on the distributed system. The job that the user has requested is divided into several parts and processed on multiple different servers. Thus, parallel computation takes place so that the request is served quickly, instead of one server performing the entire job/task. The resources are distributed to ensure the job is completed quickly.
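The divide-and-serve flow described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: `split_job`, `run_subtask`, and the thread pool (standing in for separate servers) are assumed names introduced here for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def split_job(records, num_parts):
    """Divide one job's input into roughly equal sub-task chunks."""
    size = max(1, -(-len(records) // num_parts))  # ceiling division
    return [records[i:i + size] for i in range(0, len(records), size)]

def run_subtask(chunk):
    """Stand-in worker logic; each server would process one chunk."""
    return sum(chunk)

def execute_in_parallel(records, num_servers=4):
    """Divide the job, execute chunks in parallel, then aggregate."""
    subtasks = split_job(records, num_servers)
    with ThreadPoolExecutor(max_workers=num_servers) as pool:
        partial_results = list(pool.map(run_subtask, subtasks))
    return sum(partial_results)  # aggregation of partial results
```

In a real cluster the chunks would be dispatched to distinct server applications rather than threads; the structure (split, fan-out, aggregate) is the same.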
[0029] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for executing at least one task on a distributed computing cluster, according to various embodiments of the present invention. The environment 100 includes at least one User Equipment (UE) 101 configured to at least transmit a request for executing at least one task on the distributed computing cluster. In one embodiment, the request includes creating one or more tasks based on an input received from the UE 101. In one embodiment, the UE 101 includes at least one of, but not limited to, a first UE 101a, a second UE 101b, and a third UE 101c. In one embodiment, each of the first UE 101a, the second UE 101b, and the third UE 101c is configured to create one or more tasks for execution on the distributed computing cluster.
[0030] In one embodiment, the first UE 101a, the second UE 101b and the third UE 101c are communicatively connected to a system 104 via a network 102. The first UE 101a, the second UE 101b and the third UE 101c will henceforth collectively and individually be referred to as “the UE 101”, without deviating from the scope of the present disclosure.
[0031] More information regarding the same will be provided with reference to the following figures.
[0032] In one embodiment, the UE 101 includes, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a tablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
[0033] The environment 100 further includes the server 103 communicably coupled to the UE 101 via the network 102. In one embodiment, the server 103 may include multiple servers 103. The server 103 includes, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides content.
[0034] The network 102 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 102 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] Further, the network 102 also includes, by way of example but not limitation, one or more wireless interfaces/protocols such as, for example, 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, cellular standards such as CDMA, CDMA2000, WCDMA, Radio Frequency (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0036] The environment 100 further includes the system 104 communicably coupled to the server 103 and the UE 101 via the network 102. The system 104 is configured to execute at least one task on the distributed computing cluster. Further, the system 104 is adapted to be embedded within the server 103 or is embedded as an individual entity independent of the server 103. However, for the purpose of description, the system 104 is described as an integral part of the server 103, without deviating from the scope of the present disclosure.
[0037] Operational and construction features of the system 104 will be explained in detail with respect to the following figures.
[0038] Referring to FIG. 2, FIG. 2 is an exemplary architecture of the system 104 for executing at least one task on a distributed computing cluster, according to various embodiments of the present invention. The architecture of the system 104 includes a User Interface (UI) unit 202, a distributed file system 204, a job template creation unit 206, a computation master cluster 208, a data lake 210, a job deployment unit 212 and a job processing unit 214.
[0039] In one embodiment, the system 104 is configured to simplify the execution and scheduling of tasks on the distributed computing cluster based on an input received from the UE 101 via the UI 202. In one embodiment, the distributed computing cluster refers to the computation master cluster 208. The computation master cluster 208 includes a plurality of server applications that help in compilation and deployment of jobs/tasks for execution. The UI 202 allows the user to select one or more parameters to create the one or more tasks. The one or more parameters include defining one or more task requirements and applying filters for selections of one or more logics. In one embodiment, the one or more task requirements may include, for example, but not limited to, batch processing jobs, real-time data processing tasks, machine learning model training, etc. In one embodiment, the applied filters may include, but are not limited to, attributes or metadata to be used as filter criteria, such as timestamps, categories, Identifiers (IDs), or any other relevant data fields. In one embodiment, the selections of one or more logics may include one or more conditions or rules that data or tasks must satisfy, for example, timestamps within a certain range, or a map-reduce approach in which a filtering map is applied in parallel across nodes.
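The filter criteria and logic selection described above can be illustrated with a minimal sketch. The `criteria` field names (`start`, `end`, `category`) and the record layout are hypothetical, introduced only to show how user-selected criteria could be turned into a predicate applied over records:

```python
from datetime import datetime

def build_filter(criteria):
    """Turn user-selected filter criteria into a predicate.

    `criteria` is a hypothetical dict, e.g. {"start": ..., "end": ...,
    "category": ...}; absent keys impose no constraint.
    """
    def predicate(record):
        if "start" in criteria and record["timestamp"] < criteria["start"]:
            return False
        if "end" in criteria and record["timestamp"] > criteria["end"]:
            return False
        if "category" in criteria and record["category"] != criteria["category"]:
            return False
        return True
    return predicate

records = [
    {"timestamp": datetime(2024, 1, 5), "category": "call"},
    {"timestamp": datetime(2024, 3, 9), "category": "sms"},
]
keep = build_filter({"start": datetime(2024, 1, 1),
                     "end": datetime(2024, 2, 1),
                     "category": "call"})
selected = [r for r in records if keep(r)]  # keeps only the January "call" record
```

In a map-reduce setting, such a predicate would be the filtering map applied in parallel across nodes.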
[0040] In one embodiment, the distributed file system 204 collects one or more parameters defining the one or more task requirements and applied filters for selections of one or more logics. The one or more task requirements and applied filters for selections of one or more logics are transmitted to the job template creation unit 206. The job template creation unit 206 facilitates the users to create and schedule the jobs/tasks through the UI 202 without writing code. The job template creation unit 206 determines the specific requirements of the job, including resource needs (such as Central Processing Unit (CPU), memory, storage), environment variables, and any dependencies. Then, the job template creation unit 206 schedules the job suitable for the distributed cluster environment, for example, Kubernetes with Kubernetes Job objects, Apache Mesos, Nomad, etc. Further, the job template or configuration file is specified by the job template creation unit 206. The job template includes, but is not limited to, metadata such as name, description, labels, etc., a job specification such as the command or script to execute, resource limits and requests (CPU, memory), environment variables, retry policies, constraints or node selectors to specify where the job should run, timeout settings, etc. Accordingly, the job template creation unit 206 transmits the created and/or scheduled jobs/tasks received through the UI 202 to the job deployment unit 212.
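A job template carrying the fields listed above could be sketched as a plain configuration structure. The schema below is an assumption made for illustration, loosely modeled on Kubernetes-style job objects; it is not the claimed template format, and the field names are hypothetical:

```python
# Hypothetical job template mirroring the fields named in the description.
job_template = {
    "metadata": {
        "name": "call-success-report",
        "description": "Aggregate call success rate per region",
        "labels": {"team": "analytics"},
    },
    "spec": {
        "command": ["python", "report.py"],            # script to execute
        "resources": {"cpu": "2", "memory": "4Gi"},    # limits and requests
        "env": {"REGION": "all"},                      # environment variables
        "retry_policy": {"max_retries": 3},
        "node_selector": {"pool": "batch"},            # where the job should run
        "timeout_seconds": 3600,
    },
    "schedule": "0 2 * * *",  # e.g. run daily at 02:00
}

def validate_template(template):
    """Reject templates missing the mandatory sections."""
    return {"metadata", "spec"} <= template.keys() and "command" in template["spec"]
```

A template-creation unit of this kind would validate such a structure before handing it to the deployment stage.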
[0041] Furthermore, the job deployment unit 212 automatically compiles and deploys the created and/or scheduled jobs/tasks received through the UI 202. The job deployment unit 212 systematically collects, organizes, and summarizes the data received from the job template creation unit 206 for compilation. Thereafter, the data is compiled by the job deployment unit 212. Further, the compiled data is deployed for easy accessibility across the network 102 for a plurality of applications and users. In one embodiment, the plurality of applications includes a plurality of server applications. The computation master cluster 208 includes a plurality of server applications that help in compilation and deployment of the jobs/tasks for execution. The compiled and deployed data of the jobs/tasks is executed with the help of the job processing unit 214. In one embodiment, the job processing unit 214 is communicably connected to the computation master cluster 208.
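The collect-compile-deploy flow of the deployment stage can be sketched as follows. This is a sketch under stated assumptions: canonical JSON stands in for "compilation", and `registry` is a hypothetical shared store standing in for cluster-wide deployment; neither is the claimed mechanism:

```python
import json

def compile_and_deploy(template, registry):
    """Summarize the template data, 'compile' it into a deployable
    artifact (here: canonical JSON), and publish it so that server
    applications across the network can fetch it."""
    artifact = json.dumps(template, sort_keys=True)  # compilation stand-in
    registry[template["metadata"]["name"]] = artifact  # deployment stand-in
    return artifact

registry = {}
compile_and_deploy({"metadata": {"name": "job-1"},
                    "spec": {"command": ["run"]}},
                   registry)
```

Once published, any server application holding a reference to the registry can retrieve and execute the artifact.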
[0042] Further, the job processing unit 214 dynamically allocates resources and executes the jobs/tasks in parallel across the plurality of server applications present in the computation master cluster 208. Further, the jobs/tasks are divided into several sub-tasks and deployed onto the distributed system. In one embodiment, the distributed system refers to the computation master cluster 208. For example, one job/task is divided into several parts and distributed to the plurality of server applications for parallel execution in the computation master cluster 208. In one embodiment, the computation master cluster 208 includes the plurality of server applications. Thus, the parallel computation is executed to accelerate the execution process. Results of the executed parallel computation of the job/task are aggregated and stored in the data lake 210. In one embodiment, the aggregation includes combining the results of the executed parallel computation of the job/task into a single summary. In one embodiment, the data lake 210 is coupled to the database 320 (shown in FIG. 3).
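The aggregation step, in which per-server partial results are combined into a single summary before storage, can be illustrated with a short sketch; the call-outcome counts are hypothetical example data:

```python
from collections import Counter

def aggregate_partials(partial_summaries):
    """Combine per-server partial summaries into one result, as a
    single aggregated record would be stored in the data lake."""
    total = Counter()
    for partial in partial_summaries:
        total.update(partial)
    return dict(total)

# Three server applications each counted call outcomes on their shard:
partials = [{"success": 40, "fail": 2},
            {"success": 35, "fail": 5},
            {"success": 50, "fail": 3}]
summary = aggregate_partials(partials)
```

Only the combined summary, not the individual shard results, would then be persisted for the user.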
[0043] Thus, the system 104 enables the users to monitor and manage the jobs/tasks in the distributed cluster. In this regard, advantageously, the system 104 provides a streamlined and efficient approach to the distributed data processing without the need for extensive coding or manual deployment. The aggregated result is sent back to the user. Thus, the time taken for carrying out the task is reduced because of the use of the parallel computation.
[0044] In one embodiment, the job deployment unit 212 and the job processing unit 214 are communicably coupled to the computation master cluster 208. The computation master cluster 208 includes the plurality of server applications. The computation master cluster 208 is responsible for managing and coordinating various computational tasks and applications across the distributed computing cluster. In one embodiment, each server application within the computation master cluster 208 requires specific resources to perform the associated tasks effectively. The resources include, but are not limited to, at least one of one or more files, a memory, a Central Processing Unit (CPU) core, and a bandwidth. In one embodiment, the one or more files include, for example, a configuration file related to the server application or data. The computation master cluster 208 manages the allocation of the resources based on the requirements and priorities of each server application.
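Allocation based on requirements and priorities, as described above, might look like the following sketch. The greedy priority-order policy, the CPU-core units, and the application names are illustrative assumptions, not the claimed scheduler:

```python
def allocate(capacity, requests):
    """Grant resources to server applications in priority order until
    capacity runs out. `capacity` and the `cpu` fields use CPU cores
    as an illustrative unit."""
    granted = {}
    remaining = capacity
    # Highest-priority applications are served first (greedy policy).
    for app in sorted(requests, key=lambda a: a["priority"], reverse=True):
        give = min(app["cpu"], remaining)
        granted[app["name"]] = give
        remaining -= give
    return granted

requests = [{"name": "etl", "cpu": 4, "priority": 2},
            {"name": "report", "cpu": 4, "priority": 1}]
grants = allocate(6, requests)  # "etl" gets its full 4 cores, "report" gets 2
```

A production cluster manager would additionally rebalance dynamically as jobs start and finish; this sketch shows only the requirement-and-priority principle.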
[0045] Referring to FIG. 3, FIG. 3 illustrates a block diagram of the system 104 for executing at least one task on the distributed computing cluster, according to various embodiments of the present invention. The system 104 includes a processor 302, a memory 304, the user interface (UI) unit 202, a display unit 308, an input unit 310 and a database 320. The one or more processors 302, hereinafter referred to as the processor 302, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 104 includes one processor 302. However, it is to be noted that the system 104 may include multiple processors as per the requirement, without deviating from the scope of the present disclosure. Among other capabilities, the processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304.
[0046] The memory 304 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like. In an embodiment, the UI unit 202 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output devices, storage devices, and the like. The UI unit 202 facilitates communication of the system 104. In one embodiment, the UI unit 202 provides a communication pathway for one or more components of the system 104.
[0047] The UI unit 202 may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The UI unit 202 may be rendered on the display unit 308, implemented using LCD display technology, OLED display technology, and/or other types of conventional display technology. The display unit 308 is integrated within the system 104 or connected externally. Further, the system 104 may be configured to receive requests, queries, or information from the user by using the input unit 310. The input unit 310 may include, but is not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0048] The system 104 may further comprise the database 320. The database 320 may be communicably connected to the processor 302 and the memory 304. The database 320 is configured to store and retrieve the data of the UE 101.
[0049] Further, the processor 302, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 302. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 302 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 302 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 304 may store instructions that, when executed by the processing resource, implement the processor 302. In such examples, the system 104 may comprise the memory 304 storing the instructions and the processing resource to execute the instructions, or the memory 304 may be separate but accessible to the system 104 and the processing resource. In other examples, the processor 302 may be implemented by electronic circuitry.
[0050] In order for the system 104 to execute at least one task on the distributed computing cluster, the processor 302 includes a transceiver unit 312, a deployment unit 314, a resource allocation unit 316 and an execution unit 318, wherein all the units are communicably coupled to each other. In an embodiment, the transceiver unit 312, the deployment unit 314, the resource allocation unit 316 and the execution unit 318 are enabled by the processor 302 to execute at least one task on the distributed computing cluster.
[0051] The transceiver unit 312 of the processor 302 is communicably connected to the UE 101 via the network 102. Accordingly, the transceiver unit 312 is configured to receive one or more tasks created by a user via the UI unit 202. The UI unit 202 of the system 104 allows the user to manage and monitor the one or more tasks. In one embodiment, the UI unit 202 is further configured to allow the user to create the one or more tasks by allowing the user to select the one or more parameters. In one embodiment, the one or more parameters include defining one or more task requirements and applying filters for selections of one or more logics. In one embodiment, the one or more task requirements may include, for example, but not limited to, batch processing jobs, real-time data processing tasks, machine learning model training, etc. In one embodiment, the applied filters may include attributes or metadata to be used as filter criteria, such as timestamps, categories, Identifiers (IDs), or any other relevant data fields. In one embodiment, the selections of one or more logics may include one or more conditions or rules that data or tasks must satisfy, for example, timestamps within a certain range, or a map-reduce approach in which a filtering map is applied in parallel across nodes.
[0052] In one embodiment, the one or more tasks include the job/task template. The job/task template is created based on defining one or more task requirements and applied filters for selections of one or more logics. The job/task template determines the specific requirements of the job, including resource needs (CPU, memory, storage), environment variables, and any dependencies. The job template includes, but is not limited to, metadata such as name, description, labels, etc., a job specification such as the command or script to execute, resource limits and requests (CPU, memory), environment variables, volume mounts, retry policies, constraints or node selectors to specify where the job should run, timeout settings, etc. Then, the job/task template is further scheduled for the plurality of server applications in the distributed cluster environment. The distributed cluster environment includes the plurality of server applications. Furthermore, the transceiver unit 312 is configured to transmit the scheduled job/task template for the distributed cluster environment to the deployment unit 314.
[0053] In one embodiment, on receipt of the scheduled job/task template for the distributed cluster environment, the deployment unit 314 is configured to deploy the scheduled job/task on the distributed computing cluster. In one embodiment, the distributed computing cluster is the computation master cluster 208 of FIG. 2. The computation master cluster 208 includes the plurality of server applications. The plurality of server applications compiles and deploys the scheduled job/task on the distributed computing cluster. The deployment unit 314 systematically collects, organizes, and summarizes the data received in the job template for compilation. Further, the compiled job/task template is deployed so as to be easily accessible across the network 102 for the plurality of server applications and users. The computation master cluster 208 includes the plurality of server applications that help in compilation and deployment of the data for executing the scheduled jobs/tasks on the distributed computing cluster. Further, the jobs/tasks deployed on the distributed computing cluster are transmitted to the resource allocation unit 316.
[0054] Furthermore, on receipt of the jobs/tasks deployed on the distributed computing cluster, the resource allocation unit 316 is configured to dynamically allocate a plurality of resources for the one or more tasks deployed on the distributed computing cluster. The plurality of resources includes, but is not limited to, at least one of one or more files, a memory, a Central Processing Unit (CPU) core, and a bandwidth. In one embodiment, the one or more files include a configuration file related to the server application or data. Accordingly, the plurality of resources dynamically allocated for the jobs/tasks deployed on the distributed computing cluster is transmitted to the execution unit 318.
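For illustration only, dynamic allocation of CPU and memory to deployed tasks can be sketched as grants from a shared pool. This is a minimal model under assumed semantics, not the claimed allocation mechanism; the class and method names are hypothetical.

```python
# Minimal sketch of dynamic resource allocation: grant each deployed
# task its requested CPU cores and memory from a shared pool if the
# request fits, and return them when the task finishes.
class ResourcePool:
    def __init__(self, cpu_cores, memory_mb):
        self.cpu_cores = cpu_cores
        self.memory_mb = memory_mb

    def allocate(self, cpu_request, memory_request_mb):
        """Reserve resources and return True if the request fits."""
        if cpu_request <= self.cpu_cores and memory_request_mb <= self.memory_mb:
            self.cpu_cores -= cpu_request
            self.memory_mb -= memory_request_mb
            return True
        return False

    def release(self, cpu, memory_mb):
        """Return a finished task's resources to the pool."""
        self.cpu_cores += cpu
        self.memory_mb += memory_mb
```

Under this model, a task requesting 2 cores and 4096 MB from a 16-core pool succeeds, while an oversized request is refused until resources are released.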
[0055] On receipt of the dynamically allocated plurality of resources for the deployed jobs/tasks into the distributed computing cluster, the execution unit 318 is configured to divide the one or more tasks into a plurality of sub-tasks. The plurality of sub-tasks is distributed to the plurality of server applications of the distributed computing cluster for executing parallel computation. After executing the plurality of sub-tasks parallelly, the results pertaining to the one or more tasks and associated plurality of sub-tasks are aggregated and stored in the database 320. In one embodiment, the results are visually presented to the user on the UE 101 for further analysis.
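The divide/parallel-execute/aggregate flow described above can be sketched as follows. This is an illustrative sketch only: threads stand in for the plurality of server applications, and the squaring sub-task is a placeholder assumption.

```python
# Sketch of the divide / parallel-execute / aggregate flow. Threads
# stand in for the plurality of server applications; the squaring
# sub-task is a placeholder for real per-chunk work.
from concurrent.futures import ThreadPoolExecutor

def split_into_subtasks(data, n_parts):
    """Divide one task's input into roughly equal sub-task chunks."""
    if not data:
        return []
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

def run_subtask(chunk):
    """Placeholder sub-task: square each element of its chunk."""
    return [x * x for x in chunk]

def execute_in_parallel(data, n_workers=4):
    """Distribute sub-tasks to workers, then aggregate the results."""
    chunks = split_into_subtasks(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partial_results = list(pool.map(run_subtask, chunks))
    # Aggregate the per-sub-task results into one result set.
    return [item for part in partial_results for item in part]
```

Because `map` preserves chunk order, the aggregated result matches a sequential run of the same work.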
[0056] In one embodiment, as explained above, the results pertaining to the aggregated one or more tasks and the associated plurality of sub-tasks are stored in the database 320. As a result, the usage of additional memory in the database 320 is eliminated. Advantageously, the system 104 enables users to quickly create new jobs or modify existing ones, which increases the data processing speed, simplifies the process, enhances efficiency, and revolutionizes the execution of tasks on distributed clusters.
[0057] Referring to FIG. 4, FIG. 4 is a schematic representation of a workflow of the system of FIG. 3, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 4 will be explained with respect to the first UE 101a for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0058] As mentioned earlier in FIG. 1, each of the first UE 101a, the second UE 101b, and the third UE 101c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 4 will be explained with respect to the first UE 101a. The first UE 101a includes one or more primary processors 402 communicably coupled to the one or more processors 302 of the system 104. The one or more primary processors 402 are coupled with a memory unit 404 storing instructions which are executed by the one or more primary processors 402. Execution of the stored instructions by the one or more primary processors 402 enables the first UE 101a to create one or more tasks based on the input received from the user of the first UE 101a. The request is further transmitted to the processor 302, and the results pertaining to the one or more tasks and the associated plurality of sub-tasks are displayed to the user of the first UE 101a.
[0059] As mentioned earlier, the one or more processors 302 are configured to receive the request from the first UE 101a. More specifically, the one or more processors 302 of the system 104 are configured to create one or more tasks based on the input received from the first UE 101a. Further, the first UE 101a transmits the request to the transceiver unit 312 of the system 104 for executing at least one task on the distributed computing cluster.
[0060] In the preferred embodiment, the transceiver unit 312 is configured to receive one or more tasks created by the user.
[0061] As per the illustrated embodiment, the system 104 includes the one or more processors 302, the memory 304, the UI unit 202, the display unit 308, the input unit 310 and the database 320. The operations and functions of the one or more processors 302, the memory 304, the UI unit 202, the display unit 308, the input unit 310 and the database 320, are already explained in FIG. 3. For the sake of brevity, a similar description related to the working and operation of the system 104 as illustrated in FIG. 4 has been omitted to avoid repetition.
[0062] Further, the processor 302 includes the transceiver unit 312, the deployment unit 314, the resource allocation unit 316, and the execution unit 318. The operations and functions of the transceiver unit 312, the deployment unit 314, the resource allocation unit 316, and the execution unit 318 are already explained in FIG. 3. Hence, for the sake of brevity, a similar description related to the working and operation of the system 104 as illustrated in FIG. 4 has been omitted to avoid repetition. The limited description provided for the system 104 in FIG. 4 should be read with the description provided for the system 104 in FIG. 3 above, and should not be construed as limiting the scope of the present disclosure.
[0063] Referring to FIG. 5, FIG. 5 is an exemplary signal flow diagram for executing at least one task on the distributed computing cluster, according to one or more embodiments of the present invention. For the purpose of description, the signal flow diagram is described with the embodiments as illustrated in FIG. 3 and should nowhere be construed as limiting the scope of the present disclosure.
[0064] At step 502, the user creates one or more tasks via the UI 202. The UI 202 allows the user to manage and monitor the one or more tasks. The UI 202 is configured to allow the user to select the one or more parameters to create the one or more tasks.
[0065] At step 504, upon creating one or more tasks via the UI 202, the one or more tasks on the distributed computing cluster are deployed. In one embodiment, the distributed computing cluster includes the plurality of server applications.
[0066] At step 506, upon deploying the one or more tasks on the distributed computing cluster, the plurality of resources is allocated dynamically to the one or more tasks. In one embodiment, the plurality of resources includes, but is not limited to, at least one of one or more files, the memory, the Central Processing Unit (CPU) core, and the bandwidth.
[0067] At step 508, upon allocation of the plurality of resources to the one or more tasks, the one or more tasks are divided into the plurality of sub-tasks. The plurality of sub-tasks is distributed to the plurality of server applications in the distributed computing cluster for executing parallel computation. After executing the plurality of sub-tasks parallelly, the results pertaining to the one or more tasks and the associated plurality of sub-tasks are aggregated and stored in the database 320. In one embodiment, the results are visually presented to the user on the UE 101 for further analysis.
[0068] Referring to FIG. 6, FIG. 6 illustrates a flow diagram of the method 600 for executing at least one task on a distributed computing cluster, according to various embodiments of the present invention. The method 600 is adapted for executing at least one task on the distributed computing cluster. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 3 and should nowhere be construed as limiting the scope of the present disclosure.
[0069] At step 601, the method 600 includes the step of receiving, by the transceiver unit 312, one or more tasks created by the user. In one embodiment, the transceiver unit 312 is configured to receive one or more tasks created by the user via the UI unit 202. The UI unit 202 of the system 104 allows the user to manage and monitor the one or more tasks. In one embodiment, the UI unit 202 is further configured to allow the user to create the one or more tasks by providing the user with one or more parameters. The one or more parameters include defining one or more task requirements and applying filters for the selection of one or more logics. Furthermore, the transceiver unit 312 is configured to transmit the one or more tasks created by the user to the deployment unit 314.
[0070] At step 602, the method 600 includes the step of deploying, by the deployment unit 314, the one or more tasks on the distributed computing cluster. In one embodiment, the distributed computing cluster includes the plurality of server applications. The one or more tasks deployed on the distributed computing cluster are transmitted to the resource allocation unit 316.
[0071] At step 603, the method 600 includes the step of dynamically allocating, by the resource allocation unit 316, the plurality of resources for the one or more tasks deployed on the distributed computing cluster. The plurality of resources includes, but is not limited to, at least one of one or more files, the memory, the Central Processing Unit (CPU) core, and the bandwidth. The plurality of resources dynamically allocated for the one or more tasks deployed on the distributed computing cluster is transmitted to the execution unit 318.
[0072] At step 604, the method 600 includes the step of executing the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources. The execution unit 318 is configured to divide the one or more tasks into the plurality of sub-tasks. The plurality of sub-tasks is distributed to the plurality of server applications in the distributed computing cluster for executing parallel computation. After executing the plurality of sub-tasks parallelly, the results pertaining to the one or more tasks and the associated plurality of sub-tasks are aggregated and stored in the database 320. In one embodiment, the results are visually presented to the user on the UE 101 for further analysis.
[0073] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 302. The processor 302 is configured to receive one or more tasks created by a user. The processor 302 is configured to deploy the one or more tasks on the distributed computing cluster. The processor 302 is further configured to dynamically allocate the plurality of resources for the one or more tasks deployed on the distributed computing cluster. The processor 302 is configured to execute the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources.
[0074] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0075] The present disclosure incorporates a technical advancement for executing at least one task on the distributed computing cluster by reducing manual effort, enhancing efficiency, and offering flexibility and scalability to meet job requirements. The invention takes advantage of automated job deployment and execution on the distributed computing cluster, which enhances overall efficiency in distributed data processing and saves significant time. Further, users can create jobs or tasks without the need for any programming knowledge. This simplifies the process and makes it accessible to a wider range of users, regardless of their technical background.
[0076] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0077] Environment - 100
[0078] User Equipment - 101
[0079] Network - 102
[0080] Server - 103
[0081] System - 104
[0082] User interface unit - 202
[0083] Distributed file system - 204
[0084] Job template creation unit - 206
[0085] Computation master cluster - 208
[0086] Data lake - 210
[0087] Job deployment unit - 212
[0088] Job processing unit - 214
[0089] Processor - 302
[0090] Memory - 304
[0091] Display unit - 308
[0092] Input unit - 310
[0093] Transceiver unit - 312
[0094] Deployment unit - 314
[0095] Resource allocation unit - 316
[0096] Execution unit - 318
[0097] Database - 320
[0098] Primary processor - 402
[0099] Memory unit - 404
CLAIMS
We Claim:
1. A method (600) for executing at least one task on a distributed computing cluster, the method (600) comprising the steps of:
receiving, by one or more processors (302), one or more tasks created by a user;
deploying, by the one or more processors (302), the one or more tasks on the distributed computing cluster;
dynamically allocating, by the one or more processors (302), a plurality of resources for the one or more tasks deployed on the distributed computing cluster; and
executing, by the one or more processors (302), the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources.
2. The method (600) as claimed in claim 1, wherein the distributed computing cluster includes a plurality of applications.
3. The method (600) as claimed in claim 1, wherein the one or more processors (302) allows the user to manage and monitor the one or more tasks.
4. The method (600) as claimed in claim 1, wherein the one or more processors (302) allows the user to create the one or more tasks by providing the user with one or more parameters, thereby allowing the user to select the one or more parameters to create the one or more tasks.
5. The method (600) as claimed in claim 4, wherein the one or more parameters includes, defining one or more task requirements and applying filters for selections of one or more logics.
6. The method (600) as claimed in claim 1, wherein the step of, executing, the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources, includes the steps of:
dividing, by the one or more processors (302), the one or more tasks into a plurality of sub-tasks;
executing, by the one or more processors (302), the plurality of sub-tasks parallelly utilizing a plurality of servers; and
aggregating and storing, by the one or more processors (302), in a database (320), the results pertaining to the one or more tasks and associated the plurality of sub-tasks.
7. The method (600) as claimed in claim 6, wherein the results are visually presented to a user on a display device for further analysis.
8. The method (600) as claimed in claim 1, wherein the plurality of resources includes, at least one of, one or more files, a memory, a Central Processing Unit (CPU) core, and a bandwidth.
9. A system (104) for executing at least one task on a distributed computing cluster, the system (104) comprising:
a transceiver unit (312), configured to, receive, one or more tasks created by a user;
a deployment unit (314), configured to, deploy, the one or more tasks on the distributed computing cluster;
a resource allocation unit (316), configured to, dynamically allocate, a plurality of resources for the one or more tasks deployed on the distributed computing cluster; and
an execution unit (318), configured to, execute, the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources.
10. The system (104) as claimed in claim 9, wherein the distributed computing cluster includes a plurality of applications.
11. The system (104) as claimed in claim 9, wherein an interface unit (202) of the system (104), allows the user to manage and monitor the one or more tasks.
12. The system (104) as claimed in claim 9, wherein the interface unit (202) is further configured to allow the user to create the one or more tasks by providing the user with one or more parameters, thereby allowing the user to select the one or more parameters to create the one or more tasks.
13. The system (104) as claimed in claim 9, wherein the one or more parameters includes, defining one or more task requirements and applying filters for selections of one or more logics.
14. The system (104) as claimed in claim 9, wherein the execution unit (318) is configured to, execute, the one or more tasks in parallel across the distributed computing cluster utilizing the allocated plurality of resources, by:
dividing, the one or more tasks into a plurality of sub-tasks;
executing, the plurality of sub-tasks parallelly utilizing a plurality of servers; and
aggregating and storing, in a database (320), the results pertaining to the one or more tasks and associated the plurality of sub-tasks.
15. The system (104) as claimed in claim 14, wherein the results are visually presented to a user on a User Equipment (UE) (101) for further analysis.
16. The system (104) as claimed in claim 9, wherein the plurality of resources includes, at least one of, one or more files, a memory, a Central Processing Unit (CPU) core, and a bandwidth.
17. A User Equipment (UE) (101), comprising:
one or more primary processors (402) communicatively coupled to one or more processors (302), the one or more primary processors (402) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (402) causes the UE (101) to:
creating, one or more tasks based on input received from a user;
displaying, results pertaining to one or more tasks and associated plurality of sub-tasks to a user;
wherein the one or more processors (302) is configured to perform the steps as claimed in claim 1.