
Methods and Systems for Multi-Objective Workflow Scheduling to Serverless Architecture in a Multi-Cloud Environment

Abstract: The disclosure relates generally to methods and systems for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment. Generating an optimal mapping scheme for the heterogeneous tasks of a complex task workflow in a multi-cloud environment is always a challenge. The present disclosure makes use of serverless platforms in conjunction with storage services for the optimal mapping of task workflows to the multi-cloud environment using the particle swarm optimization (PSO) algorithm. In the present disclosure, each task of the application is characterized to determine one or more compute requirements and one or more input/output (I/O) requirements. Furthermore, a compute capacity of each of the plurality of serverless computing instances, and bandwidth measurements, are determined. Then, the optimized task workflow is generated using the PSO technique by minimizing a multi-objective optimization function. [To be published with FIG. 5]


Patent Information

Application #
202321058297
Filing Date
30 August 2023
Publication Number
10/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai, Maharashtra, India 400021

Inventors

1. RAMESH, Manju
Tata Consultancy Services Limited, Yantra Park (STPI), 2nd Pokharan Road, Opp. HRD Voltas Center, Subash Nagar, Mumbai, Maharashtra, India 400601
2. CHAHAL, Dheeraj
Tata Consultancy Services Limited, Bldg. No. 7, Unit Nos. 402, 501, 601, 701 (floors 4th-7th), Commerzone, Survey No. 144/145, Samrat Ashok Path, Off Airport Road, Yerwada, Pune, Maharashtra, India 411006
3. PHALAK, Chetan Dnyandeo
Tata Consultancy Services Limited, Air-India Building, 11th Floor, Nariman Point, Mumbai, Maharashtra, India 400021
4. SINGHAL, Rekha
Tata Consultancy Services Limited, 9th Floor to 15th Floor, Olympus, Hiranandani Estate, Patlipada, Plot No. C, Village Kavesar, Thane, Maharashtra, India 400607

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)

Title of invention: METHODS AND SYSTEMS FOR MULTI-OBJECTIVE WORKFLOW SCHEDULING TO SERVERLESS ARCHITECTURE IN A MULTI-CLOUD ENVIRONMENT

Applicant: Tata Consultancy Services Limited, a company incorporated in India under the Companies Act, 1956, having address: Nirmal Building, 9th floor, Nariman Point, Mumbai 400021, Maharashtra, India

Preamble to the description: The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD

[001] The disclosure herein generally relates to the field of multi-cloud environments, and more specifically to methods and systems for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment.

BACKGROUND

[002] Multi-cloud deployment is emerging as a preferred choice for the deployment of complex task workflows, for price competitiveness and freedom from vendor lock-in. Typically, a complex task workflow consists of multiple tasks represented as a directed acyclic graph (DAG). Optimal deployment of such task workflows on a multi-cloud environment using multiple services requires a judicious selection to minimize the overall cost of deployment. However, finding an optimal mapping scheme for the heterogeneous tasks of a complex task workflow in the multi-cloud environment is a challenge.

[003] Furthermore, each participating cloud service provider (CSP) in the multi-cloud environment has a unique cost model and a maximum deliverable performance. Hence, exploration of the mapping scheme for the chosen service is always a technical challenge. Most conventional algorithms, frameworks, and tools that schedule complex task workflows on multi-cloud environments make use of virtual machines (VMs) available as Infrastructure-as-a-Service (IaaS). However, the use of scalable and cost-effective serverless platforms for scheduling complex task workflows in the multi-cloud environment is limited and inefficient in minimizing the overall cost of deployment.

SUMMARY

[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.

[005] In an aspect, a processor-implemented method for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment is provided.
The method includes the steps of: receiving a plurality of serverless computing instances and a plurality of storage services available in a plurality of cloud service providers of a multi-cloud serverless environment, wherein the plurality of serverless computing instances is of a heterogeneous configuration; receiving a task workflow of an application to be executed in the multi-cloud serverless environment, wherein the task workflow comprises a plurality of tasks and one or more dependencies among the plurality of tasks, and wherein the task workflow is received in a directed acyclic graph (DAG) structure; determining (i) one or more compute requirements, and (ii) one or more input/output (I/O) requirements, required for each of the plurality of tasks present in the task workflow, based on the one or more dependencies among the plurality of tasks; determining (i) a compute capacity of each of the plurality of serverless computing instances, based on a number of cores and a memory size defined in a configuration of an associated serverless computing instance, and (ii) bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, based on an I/O latency and one or more network overheads of the multi-cloud serverless environment, wherein the compute capacity is determined in terms of million instructions per second (MIPS); and generating an optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow, based on (i) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (ii) the compute capacity of each of the plurality of serverless computing instances, and (iii) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, using a particle swarm optimization (PSO) technique and by minimizing a multi-objective optimization function, wherein the optimized task workflow comprises an optimal mapping of each task of the plurality of tasks with an associated serverless computing instance of the plurality of serverless computing instances, and an associated storage service of the plurality of storage services.

[006] In another aspect, a system for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment is provided.
The system includes: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a plurality of serverless computing instances and a plurality of storage services available in a plurality of cloud service providers of a multi-cloud serverless environment, wherein the plurality of serverless computing instances is of a heterogeneous configuration; receive a task workflow of an application to be executed in the multi-cloud serverless environment, wherein the task workflow comprises a plurality of tasks and one or more dependencies among the plurality of tasks, and wherein the task workflow is received in a directed acyclic graph (DAG) structure; determine (i) one or more compute requirements, and (ii) one or more input/output (I/O) requirements, required for each of the plurality of tasks present in the task workflow, based on the one or more dependencies among the plurality of tasks; determine (i) a compute capacity of each of the plurality of serverless computing instances, based on a number of cores and a memory size defined in a configuration of an associated serverless computing instance, and (ii) bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, based on an I/O latency and one or more network overheads of the multi-cloud serverless environment, wherein the compute capacity is determined in terms of million instructions per second (MIPS); and generate an optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow, based on (i) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (ii) the compute capacity of each of the plurality of serverless computing instances, and (iii) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, using a particle swarm optimization (PSO) technique and by minimizing a multi-objective optimization function, wherein the optimized task workflow comprises an optimal mapping of each task of the plurality of tasks with an associated serverless computing instance of the plurality of serverless computing instances, and an associated storage service of the plurality of storage services.
[007] In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a plurality of serverless computing instances and a plurality of storage services available in a plurality of cloud service providers of a multi-cloud serverless environment, wherein the plurality of serverless computing instances is of a heterogeneous configuration; receive a task workflow of an application to be executed in the multi-cloud serverless environment, wherein the task workflow comprises a plurality of tasks and one or more dependencies among the plurality of tasks, and wherein the task workflow is received in a directed acyclic graph (DAG) structure; determine (i) one or more compute requirements, and (ii) one or more input/output (I/O) requirements, required for each of the plurality of tasks present in the task workflow, based on the one or more dependencies among the plurality of tasks; determine (i) a compute capacity of each of the plurality of serverless computing instances, based on a number of cores and a memory size defined in a configuration of an associated serverless computing instance, and (ii) bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, based on an I/O latency and one or more network overheads of the multi-cloud serverless environment, wherein the compute capacity is determined in terms of million instructions per second (MIPS); and generate an optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow, based on (i) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (ii) the compute capacity of each of the plurality of serverless computing instances, and (iii) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, using a particle swarm optimization (PSO) technique and by minimizing a multi-objective optimization function, wherein the optimized task workflow comprises an optimal mapping of each task of the plurality of tasks with an associated serverless computing instance of the plurality of serverless computing instances, and an associated storage service of the plurality of storage services.

[008] In an embodiment, the one or more compute requirements required for each task in the task workflow are determined based on a number of million instructions (MI) present in an associated task and the associated one or more dependencies, and the one or more I/O requirements required for each task in the task workflow are determined based on a number of I/O operations, a file size of the associated task, and the associated one or more dependencies.
[009] In an embodiment, generating the optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow, based on (i) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (ii) the compute capacity of each of the plurality of serverless computing instances, and (iii) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, using the particle swarm optimization (PSO) technique with the multi-objective optimization function, comprises: (a) initializing each particle of a plurality of particles in an initial iteration, with random mappings, using the PSO technique to obtain an initial position of each particle, wherein each particle in the random mappings represents a random mapping of each task of the plurality of tasks with the plurality of storage services and the plurality of serverless computing instances; (b) calculating a value of the multi-objective optimization function for each particle, based on (i) the initial position of an associated particle, (ii) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (iii) the compute capacity of each of the plurality of serverless computing instances, and (iv) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, in the initial iteration; (c) determining a local best of each particle, and a global best particle among the plurality of particles in the initial iteration, based on the value of the multi-objective optimization function for each particle; (d) calculating one or more rate parameters of the plurality of particles in the initial iteration using an adaptive rate technique, wherein the one or more rate parameters are an inertia weight, a local learning rate, and a global learning rate; (e) moving each particle of the plurality of particles in a successive iteration, to obtain a next position of each particle, based on the initial position of each particle, the one or more rate parameters of the plurality of particles, the local best of each particle, and the global best particle among the plurality of particles in the initial iteration; (f) calculating the value of the multi-objective optimization function for each particle, based on (i) the next position of the associated particle, (ii) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (iii) the compute capacity of each of the plurality of serverless computing instances, and (iv) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, in the successive iteration; (g) determining the local best of each particle and the global best particle among the plurality of particles in the successive iteration, based on the value of the multi-objective optimization function for each particle in the successive iteration; (h) calculating the one or more rate parameters of the plurality of particles in the successive iteration using the adaptive rate technique, wherein the one or more rate parameters are the inertia weight, the local learning rate, and the global learning rate; (i) repeating the steps (e)
through (h) by considering the successive iteration as the initial iteration and the next position as the initial position, until one of (i) a predefined number of iterations is completed, and (ii) a global minimum of the multi-objective optimization function is met, to obtain a final position of the global best particle; and (j) generating the optimal task workflow of the application for deployment in the multi-cloud serverless environment, based on the final position of the global best particle.

[010] In an embodiment, generating the optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow, based on (i) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (ii) the compute capacity of each of the plurality of serverless computing instances, and (iii) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, using the particle swarm optimization (PSO) technique with the multi-objective optimization function, further comprises: perturbing each particle of the plurality of particles from an associated current position to the next position, if a change in the value of the multi-objective optimization function of the global best particle after a predefined number of iterations is less than a predefined threshold, and wherein a number of perturbations is less than a predefined perturbation threshold.

[011] In an embodiment, the multi-objective optimization function is a weighted sum of a makespan and an execution cost of the task workflow of the application, wherein the makespan is defined as a total execution time of the task workflow, and the execution cost is defined as a total cost to execute the task workflow on the multi-cloud serverless environment based on a predefined cost model.

[012] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

[013] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:

[014] FIG. 1 is an exemplary block diagram of a system for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment, in accordance with some embodiments of the present disclosure.

[015] FIG. 2 illustrates an exemplary flow diagram of a processor-implemented method for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.

[016] FIG. 3 illustrates an exemplary task workflow in a directed acyclic graph (DAG) structure to be deployed in a multi-cloud serverless environment, in accordance with some embodiments of the present disclosure.

[017] FIGS. 4A and 4B illustrate exemplary flow diagrams for generating an optimized task workflow of an application for deployment in a multi-cloud serverless environment from a task workflow, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[018] FIG. 5 illustrates an exemplary multi-cloud serverless environment along with an optimal mapping of a task workflow, in accordance with some embodiments of the present disclosure.

[019] FIG. 6 is a graph showing the convergence of the makespan and the execution cost function for the different particle positions for the Digital Drawing application, in accordance with some embodiments of the present disclosure.

[020] FIG. 7 is a graph showing the makespan and the execution cost for the different particle positions of the swarm at the end of 1, 100, 200, and 350 iterations for the Digital Drawing application, with weights of 0.9 and 0.1 for the makespan and the execution cost respectively, in accordance with some embodiments of the present disclosure.

[021] FIG. 8 is a graph showing the makespan and execution cost comparison of the mapping generated using simulation and its execution on actual cloud instances for the Digital Drawing application, in accordance with some embodiments of the present disclosure.

[022] FIG. 9 is a graph showing the makespan and execution cost comparison of the mapping generated using simulation and its execution on actual cloud instances for the Deep Reader application, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

[023] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of the disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.

[024] Many enterprises are adopting multi-cloud workload deployment strategies primarily to leverage vendor-specific capabilities of cloud services. Other motivations for using a multi-cloud environment are to maximize performance, minimize the cost of deployment, and avoid vendor lock-in. Use of services from multiple clouds for computing, storage, and networking can result in more than 45% cost savings, and 26% savings for complex deployments involving databases and serverless services.

[025] Workflow scheduling in the multi-cloud environment is a non-deterministic polynomial time hard (NP-hard) optimization problem. Provisioning a workload consisting of a flow of multiple tasks across multiple clouds results in higher availability, improved fault tolerance, and freedom from vendor lock-in. However, the uniqueness of each vendor's cloud services, tools, and frameworks results in increased deployment complexity. Additionally, one of the deterrents to multi-cloud deployment is the complexity of the models for cost estimation.

[026] Cloud service providers (CSPs) provide compute services via Infrastructure-as-a-Service (IaaS), such as Amazon Web Services (AWS), Azure cloud services, Google Cloud Platform (GCP), etc. The virtual instances are billed for the duration they are acquired, irrespective of whether they are idle or used. IaaS proves to be a good option for long-running and consistent workloads. However, its use may not be cost-effective for bursty workloads and inconsistent resource demands, because over-provisioning results in idle resources while under-provisioning leads to performance degradation.
[027] One of the popular offerings from cloud vendors is Function-as-a-Service (FaaS) for the serverless deployment of workloads, such as AWS Lambda, Azure Functions, and GCP Cloud Functions. FaaS is a highly scalable solution, particularly useful for dynamically varying workloads. Unlike virtual machines (VMs) available as part of the IaaS offering on the cloud, users are billed based on the actual usage of the FaaS instances. Additionally, FaaS is highly scalable, and new instances can be spawned in a few milliseconds as and when a request for execution is generated. This characteristic is suitable for workflows represented as a directed acyclic graph (DAG) structure, where the number of concurrent jobs changes with time. Contrary to this, starting and stopping a new VM takes a few seconds; hence, changing the state of VMs with a changing number of concurrent jobs is costly and results in higher latency. The pay-per-use cost model of FaaS results in a cost-effective deployment of workloads. However, one of the drawbacks of FaaS instances is statelessness. For any persistent storage requirements, users can avail of cloud storage services such as AWS S3, Azure Blob Storage, GCP Cloud Storage, etc. in conjunction with FaaS. Additionally, peer-to-peer communication is not possible in FaaS; hence, to communicate messages or data from one FaaS instance to another, a storage service or file system is used.

[028] FaaS platforms can be exploited in the multi-cloud environment for deploying artificial intelligence (AI) workflows to achieve high-performance and cost-effective deployment. The optimal deployment of such workloads necessitates the use of cloud storage services for persistent storage. It also requires finding the optimal configuration of the computing instances for each individual task in the DAG, to avoid over-provisioning or under-provisioning.

[029] Very few techniques exist in the art for the optimal deployment of the tasks in the DAG of complex workflows on serverless platforms in the multi-cloud environment. Furthermore, conventional techniques that propose a methodology for deploying workflow tasks on serverless platforms in conjunction with storage services from multiple clouds are very limited. Researchers have proposed frameworks and approaches to map complex workflows to VMs, specifically in a single cloud. However, mapping such workflows to serverless platforms and cloud storage services in the multi-cloud environment poses unique challenges:
• The computing and storage in the serverless architecture are not co-located as in VMs. Hence, the makespan of the workflow is significantly affected not only by the computing resources but also by the data transfer rate between instances and storage services of the multiple clouds. However, the cost models and service features such as network bandwidth, computing power, etc. vary significantly across multiple CSPs and are thus difficult to simulate.
• Finding the processing power of serverless instances is a challenge. For example, the CSPs do not provide the million instructions per second (MIPS) that can be executed by a serverless instance for a given configuration.

[030] The primary objective of a scheduling algorithm in the multi-cloud context is to find the most suitable service resource configuration which improves quality of service (QoS) parameters such as makespan, execution time, reliability, etc.
Researchers have presented various task scheduling algorithms specifically for the multi-cloud. Meta-heuristic algorithms such as the Genetic Algorithm (GA), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), etc., and their variants have been used successfully for finding optimal solutions to NP problems. Further, search-based methods for multi-cloud configuration have been explored in the art. Many algorithms have been proposed for mapping complex workflows to the VMs available on the cloud as an IaaS offering.

[031] PSO-based task scheduling has been proposed by many researchers in the art. Cost- and makespan-aware scheduling of workflows using IaaS has been proposed. Multi-objective optimization in the multi-cloud IaaS environment using a variant of PSO has also been discussed in the art, as have resource- and deadline-aware scheduling using PSO, energy-aware task scheduling using the PSO algorithm for heterogeneous clouds, and a cloud task scheduling framework based on a modified PSO for conflicting objectives. Most of these approaches either propose task scheduling solutions specific to IaaS, or do not consider the multi-cloud setting and the multi-objective optimization of conflicting parameters such as time and cost.

[032] Many meta-heuristic approaches such as GA, ACO, and PSO have been proposed to solve task scheduling problems. However, PSO is the most popular of all due to its ease of implementation, consistent performance, and robust control parameters, and it generates a better schedule than GA. The PSO algorithm suffers from the disadvantage that it often converges quickly to a local minimum; however, many improvements have lately been proposed in the art to find the global minimum.

[033] Deployment of deep learning models using a serverless platform has been presented in the art. However, those works focus on studying the feasibility of a serverless platform for deep learning inference. A greedy algorithm-based approach for scheduling multi-function serverless applications on a hybrid cloud has also been presented, as has the use of a serverless platform for end-to-end machine learning (ML) workflows using the cirrus framework. An in-depth investigation of the literature suggests that there are numerous variants of PSO for task scheduling on the cloud, but most of these implementations are focused on mapping to VMs.

[034] The present disclosure solves the technical problems in the art with methods and systems for multi-objective workflow scheduling to serverless architecture in the multi-cloud environment. The methods and systems of the present disclosure make use of an adaptive rate particle swarm optimization (PSO) algorithm for searching the optimal configuration of the serverless computing instances, such as FaaS instances, to deploy task workflows such as AI task workflows. Further, the present disclosure makes use of serverless platforms in conjunction with storage services for the optimal mapping of task workflows to the multi-cloud environment using PSO. The mapping optimization of the present disclosure includes the overhead due to the use of cloud storage for persistent storage and for communication between serverless instances running DAG tasks. The proposed disclosure is highly suitable for batch workflows, where the number of concurrent jobs changes and hence the resource requirements vary during the execution of the workload.
[035] Referring now to the drawings, and more particularly to FIG. 1 through FIG. 9, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments, and these embodiments are described in the context of the following exemplary systems and/or methods.

[036] FIG. 1 is an exemplary block diagram of a system 100 for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more hardware processors 104, the memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism.

[037] The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like, as well as interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer, and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases.

[038] The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems with one another or to another server computer, and one or more ports for connecting a number of devices to one another or to another server.

[039] The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions 'processors' and 'hardware processors' may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like.

[040] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a.
The plurality of modules 102a may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.

[041] The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be implemented by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in FIG. 1). Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.

[042] The repository 102b may include a database or a data engine. Further, the repository 102b, amongst other things, may serve as a database or include a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, data may be added into the external database, existing data may be modified, and non-useful data may be deleted from the external database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory or a Relational Database Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database.

[043] Referring to FIG. 2, components and functionalities of the system 100 are described in accordance with an example embodiment of the present disclosure. FIG. 2 illustrates an exemplary flow diagram of a processor-implemented method 200 for multi-objective workflow scheduling to serverless architecture in a multi-cloud environment, using the system 100 of FIG. 1, in accordance with some embodiments of the present disclosure. Although the steps of the method 200, including process steps, method steps, techniques, or the like, may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of the processes described herein may be performed in any practical order. Further, some steps may be performed simultaneously, or some steps may be performed alone or independently.
[044] At step 202 of the method 200, the one or more hardware processors 104 of the system 100 are configured to receive a plurality of serverless computing instances and a plurality of storage services available in a plurality of cloud service providers of the multi-cloud serverless environment. The plurality of cloud service providers of the multi-cloud serverless environment include, but are not limited to, Amazon Web Services (AWS), Azure cloud services, and Google Cloud Platform (GCP).

[045] In an embodiment, the plurality of serverless computing instances is of a heterogeneous configuration, which means the configuration of each serverless computing instance may differ from the others, including but not limited to the number of central processing unit (CPU) cores, the memory size, the CPU clock frequency, and so on. Though the present disclosure considers the heterogeneous configuration, the scope is not limited to a homogeneous configuration, the heterogeneous configuration, or a combination thereof.

[046] At step 204 of the method 200, the one or more hardware processors 104 of the system 100 are configured to receive a task workflow of an application to be executed in the multi-cloud serverless environment. As explained above, the task workflow is received in the directed acyclic graph (DAG) structure. The task workflow includes a plurality of tasks and one or more dependencies among the plurality of tasks. The one or more dependencies include the dependencies of each task on the other tasks of the plurality of tasks. In an embodiment, some tasks may have dependencies on other tasks (referred to as dependent tasks), while other tasks may not have any dependencies at all (referred to as independent tasks).

[047] FIG. 3 illustrates an exemplary task workflow in the directed acyclic graph (DAG) structure to be deployed in the multi-cloud serverless environment, in accordance with some embodiments of the present disclosure. As shown in FIG. 3, the exemplary task workflow consists of 50 tasks along with their priorities and dependencies.

[048] At step 206 of the method 200, the one or more hardware processors 104 of the system 100 are configured to determine (i) one or more compute requirements and (ii) one or more input/output (I/O) requirements, required for each task present in the task workflow. The one or more compute requirements and the one or more input/output (I/O) requirements represent a set of standards to benchmark the tasks present in the task workflow based on their dependencies.

[049] In an embodiment, the one or more compute requirements required for each task in the task workflow are determined based on the number of million instructions (MI) present in an associated task and the associated dependent tasks (one or more dependencies), as received in the task workflow. In an embodiment, the one or more I/O requirements required for each task in the task workflow are determined based on the number of I/O operations present in the associated task, the file size of the associated task, and the associated dependent tasks (one or more dependencies).
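For concreteness, the quantities benchmarked in steps 206 and 208 can be pictured as a small data model. The following Python sketch is illustrative only; the class and field names (Task.mi, ServerlessInstance.mips, StorageService.bandwidth, and so on) are assumptions rather than names used in the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    task_id: int
    mi: float                     # compute requirement in million instructions (MI)
    io_bytes: float               # I/O volume implied by the task and its dependencies
    deps: List[int] = field(default_factory=list)  # ids of predecessor tasks in the DAG

@dataclass
class ServerlessInstance:
    instance_id: int
    cores: int
    memory_mb: int
    mips: float                   # benchmarked compute capacity (MIPS)

@dataclass
class StorageService:
    storage_id: int
    # measured bandwidth (MB/s) to/from each instance, keyed by instance_id;
    # this is where I/O latency and cross-cloud network overheads show up
    bandwidth: Dict[int, float] = field(default_factory=dict)
```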
[050] At step 208 of the method 200, the one or more hardware processors 104 of the system 100 are configured to determine (i) a compute capacity of each of the plurality of serverless computing instances, and (ii) bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances. The compute capacity and the bandwidth measurements represent the set of standards to benchmark (re-factor) the serverless computing instances of the heterogeneous configuration and the storage services, respectively.

[051] In an embodiment, the compute capacity of each of the plurality of serverless computing instances is determined based on the number of cores and the memory size defined in the configuration of the associated serverless computing instance. In an embodiment, the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances are determined based on an I/O latency and one or more network overheads of the multi-cloud serverless environment. In an embodiment, the compute capacity is determined in terms of million instructions per second (MIPS), to map to the number of million instructions (MI) present in the associated task.

[052] At step 210 of the method 200, the one or more hardware processors 104 of the system 100 are configured to generate an optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow. The optimized task workflow includes an optimal mapping of each task of the plurality of tasks with an associated serverless computing instance of the plurality of serverless computing instances, and an associated storage service of the plurality of storage services. The optimized task workflow is deployed in the multi-cloud serverless environment to minimize the overall deployment cost.

[053] The optimized task workflow is generated based on the benchmarking done in the earlier steps. More specifically, the optimized task workflow is generated based on (i) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, determined at step 206 of the method 200, (ii) the compute capacity of each of the plurality of serverless computing instances, determined at step 208 of the method 200, and (iii) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, determined at step 208 of the method 200.

[054] An adaptive rate-based particle swarm optimization (PSO) technique is employed, minimizing a multi-objective optimization function, to generate the optimized task workflow. The multi-objective optimization function is defined as a weighted sum of a makespan and an execution cost of the task workflow of the application. The makespan is defined as the total execution time of the task workflow. The execution cost is defined as the total cost to execute the task workflow on the multi-cloud serverless environment, based on a predefined cost model of the cloud service provider.

[055] Let there be m cloud service providers (CSPs) in the multi-cloud environment, denoted by C = {C_1, C_2, ..., C_m}. Let there be k serverless instances of heterogeneous configurations, denoted by I = {I_1, I_2, ..., I_k}, and s storage services, S = {S_1, S_2, ..., S_s}, available on these clouds. The DAG W(T, D) of a workflow consisting of n tasks is defined, where T denotes the collection of tasks and D represents the dependencies between these tasks. The objective of the present disclosure is to map the tasks in set T to the serverless instances in I and the storage services in S available with the m CSPs, and to minimize the multi-objective optimization function.

[056] The multi-objective optimization function f(E_t, C) is mathematically represented as in equation (1):

f(E_t, C) = w_1 \cdot E_t + w_2 \cdot C ---------------- (1)

where the functions E_t and C represent the makespan of the task workflow and the cost of execution, respectively, and w_1 and w_2 are predefined weights based on individual function relevance. The objective functions are normalized, and the weights w_1 and w_2 are chosen such that their sum is 1. The quality of service (QoS) is measured as the deadline of execution, and the budget is used as a constraint.
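Equation (1) reduces to a few lines of code. A minimal sketch follows; the normalization constants makespan_ref and cost_ref are assumed to be supplied by the caller, since the disclosure states that the objectives are normalized but does not fix a scheme:

```python
def fitness(makespan, cost, w1=0.9, w2=0.1, makespan_ref=1.0, cost_ref=1.0):
    """Weighted-sum objective of equation (1): f = w1 * E_t + w2 * C.

    makespan_ref and cost_ref are assumed normalization constants (e.g. the
    worst values observed so far) so that both terms are dimensionless.
    The weights must sum to 1; 0.9/0.1 mirrors the weighting used for the
    Digital Drawing experiments (FIG. 7)."""
    return w1 * (makespan / makespan_ref) + w2 * (cost / cost_ref)
```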
[057] Makespan (E_t) computation: Unlike IaaS, FaaS is stateless, and any data transfer between two functions takes place using communication channels such as object storage, file systems, databases, etc. The execution time of each task in the DAG is governed by multiple factors, as stated in equation (2):

T_i = T_{end,j} + T_{trans} + T_{comp} ---------------- (2)

where T_{end,j} represents the end time of the previous task j, T_{trans} is the network and storage service latency, and T_{comp} is the actual computation time of the task.

[058] In the multi-cloud environment, data can be stored in any of the participating clouds and their services. The upload and download bandwidth provided by these clouds also varies. Moreover, the storage service characteristics of these clouds vary, which results in different data retrieval and storage times. The transmission time T_{trans} of the data from task j to a storage service on cloud m and back to a serverless function can be represented as in equation (3):

T_{trans} = T_{j,m} + T_s + T_{m,i} ---------------- (3)

where T_{j,m} is a function of the storage service upload latency, which depends on the network bandwidth B(j,m) between the function and the storage service on the same or a different cloud. Likewise, T_{m,i} represents the time to load data into function i that was saved by the previous function j in the storage service; T_{m,i} depends on the bandwidth B(m,i) between the function and the storage service. Both T_{j,m} and T_{m,i} also depend on the data D(j,i) transferred between the functions and the storage service:

T_{j,m} = D(j,i) / B(j,m)
T_{m,i} = D(j,i) / B(m,i)

[059] The delay T_s is due to the time spent on storing and retrieving data within the cloud service. Using this, the makespan (E_t) is calculated as:

E_t = max(T_{end})

where T_{end} is the end time of the last job in the DAG.
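The makespan model of equations (2) and (3) can be evaluated by traversing the DAG in topological order. A minimal sketch, assuming tasks are given as (MI, dependencies, output size) triples and that bandwidth(src, dst) returns the measured MB/s between an instance and a storage service; all names are illustrative:

```python
def makespan(tasks, order, mapping, mips, bandwidth, t_s=0.05):
    """tasks: tid -> (mi, deps, data_out_mb); order: topological order of tids;
    mapping: tid -> (instance_id, storage_id); mips: instance_id -> MIPS.
    t_s is an assumed per-transfer storage-service delay in seconds."""
    end = {}
    for tid in order:
        mi, deps, _out_mb = tasks[tid]
        inst, _store = mapping[tid]
        ready = max((end[d] for d in deps), default=0.0)   # T_end,j
        t_trans = 0.0
        for d in deps:                                     # equation (3) per input
            d_inst, d_store = mapping[d]
            d_mb = tasks[d][2]
            t_trans += d_mb / bandwidth(d_inst, d_store)   # T_j,m: producer uploads
            t_trans += t_s                                 # T_s: storage-side delay
            t_trans += d_mb / bandwidth(d_store, inst)     # T_m,i: consumer downloads
        end[tid] = ready + t_trans + mi / mips[inst]       # equation (2)
    return max(end.values())                               # E_t = max(T_end)
```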
[060] Execution cost (C) computation: The total cost of executing task k on an instance j is expressed as in equation (4):

C_{k,j} = C_e + C_n + C_s ---------------- (4)

Here C_e represents the cost due to the total memory reserved for the serverless computing instance and the execution time of the function on that serverless computing instance; it varies across serverless instances from different CSPs. C_s is the cost of the storage service used to save the intermediate data generated by one serverless instance and consumed by another. C_n is the cost to transfer data from service A to service B, and it depends on whether the services are running in two regions or residing in the same region and zone.

[061] Compute cost (C_e): The compute cost (C_e) is due to the execution time of the function and the configuration of the serverless instance. The cost for one invocation of a serverless function is:

C_e = E_{st} \cdot M_{GB} \cdot R_{mem}

where E_{st} is the execution time of the function, M_{GB} is the memory configuration, and R_{mem} is the billing rate per memory-time consumed.

[062] Storage cost (C_s): The object storage cost C_s follows the cost model of the various clouds and is proportional to the size of the data stored and the operations performed on it (read, write, delete). The billing rate also varies across the different CSPs:

C_{St,i} = D_{St,i} \cdot R_{St,i} + \sum_{q=1}^{op} n_{q,St,i} \cdot R_{q,St,i}

where D_{St,i} denotes the total data size stored in cloud i, op is the number of different operations possible on the storage service, n_{q,St,i} is the number of operations of type q performed on the storage service of the i-th cloud, and R_{St,i} and R_{q,St,i} represent the billing rates for data stored and operations performed in the storage service of cloud i.

[063] Network cost (C_n): All outbound data transfers incur network charges. This includes data to other cloud providers, as well as to other regions of the same cloud provider. The total network cost for the F_i functions in the i-th cloud is calculated as follows:

C_{Nw,i} = \sum_{j=1}^{F_i} Q_{ij} \cdot D_{out,ij} \cdot R_{Nw,i}

where D_{out,ij} is the average data size transferred out by the j-th function in the i-th cloud, Q_{ij} is the number of executions of the j-th function in the i-th cloud, and R_{Nw,i} denotes the billing rate per GB of data transfer.

[064] Problem complexity: Let there be m CSPs available for mapping a workflow of n tasks to their serverless platforms. Each serverless platform has C possible configurations. Additionally, each task can use one of S storage services provided by the multiple CSPs. Assuming that each workflow is executed with one storage service at a time, the total number of possible mapping combinations is calculated using equation (5):

N = (m \cdot C)^n \cdot S ---------------- (5)

For example, more than 14 million combinations are possible for mapping a workflow containing 7 tasks on 3 clouds, using 3 configurations of serverless instances each and 3 storage services.

[065] PSO is a well-known swarm intelligence-based optimization technique that imitates the movement of a flock of birds. A particle represents a bird in the flock. A swarm population is represented by generations of multiple particles, and each generation has multiple particles. Every particle in the generation is represented with a position, a velocity, a global best (gbest), and a local best (lbest). The lbest represents the best solution of the particle itself, while gbest is the best solution among all the particles.

[066] Each particle in PSO is characterized by a position P and a velocity v in a D-dimensional search space. The particle velocity v and position P are updated after each iteration, and a particle moves towards the lbest as well as the gbest. The velocity v of a particle i at time instant t+1 with respect to time instant t is given as follows:

v_i^{t+1} = W \cdot v_i^t + r_{l,i} \cdot p_i \cdot (lbestP_i - P_i^t) + r_{g,i} \cdot q_i \cdot (gbestP_i - P_i^t) ---------------- (6)

where W denotes the inertia weight that controls the momentum of the particle, r_{l,i} and r_{g,i} are learning rates, and p_i and q_i represent random values between 0 and 1. At the next iteration, the position P of each particle is updated as:

P_i^{t+1} = P_i^t + v_i^{t+1}
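As a quick sanity check on equation (5), (3 × 3)^7 × 3 = 14,348,907, which is the "more than 14 million" combinations cited above. The velocity and position updates of equation (6) can be sketched as below, treating each dimension as an index into a finite set of (instance, storage) choices; the rounding and clipping is an assumed discretization, since the disclosure does not spell out the encoding:

```python
import numpy as np

def pso_step(pos, vel, lbest, gbest, W, r_l, r_g, n_options, rng):
    """One velocity/position update per equation (6) for the whole swarm.

    pos, vel, lbest have shape (particles, tasks); gbest has shape (tasks,).
    Each dimension encodes the (instance, storage) choice for one task as an
    index in [0, n_options); rounding and clipping keep positions valid."""
    p = rng.random(pos.shape)                      # random p_i in [0, 1)
    q = rng.random(pos.shape)                      # random q_i in [0, 1)
    vel = W * vel + r_l * p * (lbest - pos) + r_g * q * (gbest - pos)
    pos = np.clip(np.rint(pos + vel), 0, n_options - 1)
    return pos, vel
```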
[067] FIGS. 4A and 4B illustrate exemplary flow diagrams for generating an optimized task workflow of an application for deployment in a multi-cloud serverless environment from a task workflow, using the system 100 of FIG. 1, in accordance with some embodiments of the present disclosure. As shown in FIGS. 4A and 4B, generating the optimized task workflow of the application for deployment in the multi-cloud serverless environment from the task workflow is explained through steps 210a to 210j.

[068] At step 210a, each particle of the plurality of particles is initialized in an initial iteration, with random mappings, using the PSO technique. This initialization step results in obtaining an initial position of each particle. Each particle in the random mappings represents a random mapping of each task of the plurality of tasks with the plurality of storage services and the plurality of serverless computing instances.

[069] At step 210b, a value of the multi-objective optimization function for each particle is calculated, based on (i) the initial position of the associated particle, (ii) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (iii) the compute capacity of each of the plurality of serverless computing instances, and (iv) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances, in the initial iteration.

[070] At step 210c, a local best of each particle and a global best particle among the plurality of particles in the initial iteration are determined, based on the value of the multi-objective optimization function for each particle calculated at step 210b. The local best of each particle refers to the position of the associated particle at which the value of the multi-objective optimization function is minimum in the initial iteration. The global best particle is the particle having the least value of the multi-objective optimization function among the plurality of particles in the initial iteration. At step 210d, one or more rate parameters of the plurality of particles in the initial iteration are calculated using an adaptive rate technique. The one or more rate parameters are an inertia weight, a local learning rate, and a global learning rate.

[071] At step 210e, each particle of the plurality of particles is moved in a successive iteration, to obtain a next position of each particle. Each particle is moved based on the initial position of each particle, the one or more rate parameters of the plurality of particles determined at step 210d, the local best of each particle, and the global best particle among the plurality of particles in the initial iteration determined at step 210c.
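A minimal sketch of the random initialization of step 210a, under the same assumed flat encoding (one (instance, storage) index per task); the pso_step sketch above then realizes the movement of step 210e:

```python
import numpy as np

def init_swarm(n_particles, n_tasks, n_options, rng):
    """Step 210a: random initial mappings. Each particle is a vector of
    n_tasks entries, each indexing one hypothetical (serverless instance,
    storage service) pair for the corresponding task."""
    pos = rng.integers(0, n_options, size=(n_particles, n_tasks)).astype(float)
    vel = rng.uniform(-1.0, 1.0, size=(n_particles, n_tasks))
    return pos, vel
```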
[072] At step 210f, the value of the multi-objective optimization function for each particle is calculated in the successive iteration, based on (i) the next position of the associated particle obtained at step 210e, (ii) the one or more compute requirements and the one or more input/output (I/O) requirements required for each task present in the task workflow, (iii) the compute capacity of each of the plurality of serverless computing instances, and (iv) the bandwidth measurements between each of the plurality of storage services and each of the plurality of serverless computing instances.

[073] At step 210g, the local best of each particle and the global best particle among the plurality of particles in the successive iteration are determined, based on the value of the multi-objective optimization function for each particle in the successive iteration. The local best of each particle refers to the position of the associated particle at which the value of the multi-objective optimization function is minimum after the successive iteration. The global best particle is the particle having the least value of the multi-objective optimization function among the plurality of particles after the successive iteration.

[074] At step 210h, the one or more rate parameters of the plurality of particles are then calculated in the successive iteration using the adaptive rate technique. The one or more rate parameters are the inertia weight, the local learning rate, and the global learning rate.

[075] At step 210i, the steps 210e through 210h are repeated by considering the successive iteration as the initial iteration and the next position as the initial position. The repetition continues until either a predefined number of iterations is completed, or a global minimum of the multi-objective optimization function is met. After this convergence, a final position of the global best particle is obtained.

[076] At step 210j, the optimal task workflow of the application for deployment in the multi-cloud serverless environment is generated based on the final position of the global best particle. Using the final position of the global best particle, the optimal mapping of each task of the plurality of tasks with the associated serverless computing instance of the plurality of serverless computing instances, and the associated storage service of the plurality of storage services, is obtained to generate the optimal task workflow.

[077] Further, during the generation of the optimal task workflow, if the change in the value of the multi-objective optimization function of the global best particle after a predefined number of iterations is less than a predefined threshold, then each particle of the plurality of particles is perturbed such that each particle moves from its associated current position to the next position, as sketched below.
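One way to realize the trigger of paragraph [077] is a stall test over the recent global-best history. The window, threshold, and kick below are assumed tunables, and the perturbation is sketched here as a random position kick; paragraph [078] below describes the variant in which the learning rates are changed instead:

```python
import numpy as np

def maybe_perturb(best_history, pos, n_options, n_perturbed, rng,
                  window=20, eps=1e-4, max_perturbations=2):
    """Stall test per [077]: perturb when the global best improved by less
    than eps over the last `window` iterations, at most max_perturbations
    times. The caller should save the current local/global bests first, in
    case the global minimum has already been reached."""
    stalled = (len(best_history) > window
               and best_history[-window] - best_history[-1] < eps)
    if stalled and n_perturbed < max_perturbations:
        kick = rng.integers(-1, 2, size=pos.shape)   # -1, 0 or +1 per task
        pos = (pos + kick) % n_options               # wrap to a valid mapping
        n_perturbed += 1
    return pos, n_perturbed
```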
[078] Perturbation in PSO: One of the well-known challenges in the PSO algorithm is to distinguish between local and global minima when it shows minor or no improvement in the multi-objective optimization function (generally called the fitness function in PSO) over adjacent iterations. Many strategies have been proposed in the art to overcome this limitation. One solution is to perturb the particles when no improvement is observed in the global best for a certain number of iterations. The perturbation approach involves changing the learning rates r_{l,i} and r_{g,i}, called the cognition and social parameters. This procedure is repeated twice, assuming that this is sufficient for the particles to escape a local minimum and attain the global minimum. Before applying the perturbation, the current global and local bests are saved, in case the global minimum has already been reached.

[079] Adaptive learning rates: In PSO, the optimal solution is reached by balancing both local and global searches, for which the learning rates play a major role. They need to be tuned optimally for balanced exploration and exploitation. The optimization of learning rates has been studied extensively by researchers, and many approaches have been suggested. In the present disclosure, the local (r_{l,i}(t)) and global (r_{g,i}(t)) learning rates for each iteration t are defined as:

r_{l,i}(t) = (r_l^{min} - r_l^{max}) \cdot t / N + r_l^{max}
r_{g,i}(t) = (r_g^{max} - r_g^{min}) \cdot t / N + r_g^{min}

where N is the maximum number of iterations for which the PSO algorithm is executed, r_l^{min} and r_l^{max} are the minimum and maximum values for the local learning rate, and r_g^{min} and r_g^{max} are the minimum and maximum values for the global learning rate.

[080] Adaptive inertia weights: The inertia weight is another important parameter for maintaining a balance between local and global search. Many strategies such as Simple Random Inertia Weight (SRIW), Chaotic Inertia Weight (CIW), Chaotic Descending Inertia Weight (CDIW), etc. have been proposed by researchers in the art for selecting inertia weight values. In the present disclosure, the CDIW approach is used, where the inertia weight is calculated as follows:

W(t) = (x_1 - x_2) \cdot (N - t) / N + x_2 \cdot z

where x_1 and x_2 represent the initial (maximum) and final (minimum) inertia weights, z is a random value chosen between 0 and 1, N represents the maximum number of iterations, and t is the current iteration.

[081] The implementation of the PSO algorithm in the present disclosure is explained in the pseudo code below:

Input: set of tasks in the DAG, their compute requirements (in MI) and input/output data sizes; set of serverless instances with their configurations (MIPS), bandwidth details (MB/s), and the cost model.
Output: optimal mapping of the tasks in the DAG to serverless computing instances.
1. Number of particles = 100, N = 350, r_l^{min} = r_g^{min} = 0.5, r_l^{max} = r_g^{max} = 1.5, x_1 = 0.9, x_2 = 0.4
2. while t < N do
3.   Find the inertia weight W using x_1 and x_2
4.   Find the local and global learning rates r_{l,i} and r_{g,i}
5.   for each particle i do
6.     Update the velocity of the particle
7.     Update the position of the particle
8.     Evaluate the fitness of the particle at the current position (curr_pos): F_c = w_1 \cdot E_t + w_2 \cdot C
9.     The best fitness for the particle and the global best fitness are evaluated, and their corresponding positions are noted as follows:
10.    /* Update personal best */ if F_c(curr_pos) < F(lbest_i) then lbest_i = curr_pos
11.    /* Update global best */ if F_c(curr_pos) < F(gbest) then gbest = curr_pos
12.  end for
13. end while
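Pulling the pieces together, the following is a compact, runnable sketch of the adaptive-rate PSO driver outlined in the pseudo code, using the constants from the listing (100 particles, N = 350, learning rates in [0.5, 1.5], inertia from 0.9 to 0.4). The fitness callable is a stand-in to be wired to the makespan and cost models sketched earlier, and the flat integer encoding of mappings remains an assumption:

```python
import numpy as np

def adaptive_rates(t, N, r_min=0.5, r_max=1.5):
    # Paragraph [079]: the local (cognition) rate decays from r_max to r_min;
    # the global (social) rate grows from r_min to r_max.
    r_l = (r_min - r_max) * t / N + r_max
    r_g = (r_max - r_min) * t / N + r_min
    return r_l, r_g

def cdiw_inertia(t, N, rng, x1=0.9, x2=0.4):
    # Paragraph [080]: Chaotic Descending Inertia Weight.
    return (x1 - x2) * (N - t) / N + x2 * rng.random()

def pso_schedule(fitness, n_tasks, n_options, n_particles=100, N=350, seed=0):
    """Adaptive-rate PSO driver following the pseudo code. `fitness` maps a
    position vector (one (instance, storage) index per task) to the scalar
    F = w1 * E_t + w2 * C."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, n_options, (n_particles, n_tasks)).astype(float)
    vel = rng.uniform(-1.0, 1.0, (n_particles, n_tasks))
    fit = np.array([fitness(x) for x in pos])
    lbest, lbest_fit = pos.copy(), fit.copy()            # personal bests
    g = int(np.argmin(fit))
    gbest, gbest_fit = pos[g].copy(), float(fit[g])      # global best
    for t in range(N):
        W = cdiw_inertia(t, N, rng)                      # step 3
        r_l, r_g = adaptive_rates(t, N)                  # step 4
        p, q = rng.random(pos.shape), rng.random(pos.shape)
        vel = W * vel + r_l * p * (lbest - pos) + r_g * q * (gbest - pos)
        pos = np.clip(np.rint(pos + vel), 0, n_options - 1)
        fit = np.array([fitness(x) for x in pos])        # step 8
        better = fit < lbest_fit                         # step 10: personal bests
        lbest[better], lbest_fit[better] = pos[better], fit[better]
        g = int(np.argmin(lbest_fit))
        if lbest_fit[g] < gbest_fit:                     # step 11: global best
            gbest, gbest_fit = lbest[g].copy(), float(lbest_fit[g])
    return gbest, gbest_fit   # final position -> optimal task mapping

# Toy check with a stand-in fitness that prefers index 0 everywhere:
# best, f = pso_schedule(lambda x: float(x.sum()), n_tasks=7, n_options=9)
```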

Documents

Application Documents

# Name Date
1 202321058297-STATEMENT OF UNDERTAKING (FORM 3) [30-08-2023(online)].pdf 2023-08-30
2 202321058297-REQUEST FOR EXAMINATION (FORM-18) [30-08-2023(online)].pdf 2023-08-30
3 202321058297-FORM 18 [30-08-2023(online)].pdf 2023-08-30
4 202321058297-FORM 1 [30-08-2023(online)].pdf 2023-08-30
5 202321058297-FIGURE OF ABSTRACT [30-08-2023(online)].pdf 2023-08-30
6 202321058297-DRAWINGS [30-08-2023(online)].pdf 2023-08-30
7 202321058297-DECLARATION OF INVENTORSHIP (FORM 5) [30-08-2023(online)].pdf 2023-08-30
8 202321058297-COMPLETE SPECIFICATION [30-08-2023(online)].pdf 2023-08-30
9 202321058297-FORM-26 [29-09-2023(online)].pdf 2023-09-29
10 202321058297-Proof of Right [18-12-2023(online)].pdf 2023-12-18
11 Abstract.1.jpg 2024-01-18
12 202321058297-Power of Attorney [09-08-2024(online)].pdf 2024-08-09
13 202321058297-Form 1 (Submitted on date of filing) [09-08-2024(online)].pdf 2024-08-09
14 202321058297-Covering Letter [09-08-2024(online)].pdf 2024-08-09
15 202321058297-CORRESPONDENCE(IPO)-(WIPO DAS)-13-08-2024.pdf 2024-08-13
16 202321058297-FORM 3 [06-11-2024(online)].pdf 2024-11-06
17 202321058297-FORM-26 [07-11-2025(online)].pdf 2025-11-07