
A System And Method For Identifying Optimal Workflow At Runtime

Abstract: The present disclosure relates generally to a system and method for identifying an optimal workflow pattern during runtime of a task. In a task-executable environment with end-to-end automation, identifying an optimal workflow pattern is crucial. To identify a suitable workflow pattern for complex tasks in dynamic work environments during runtime, QoS parameters must be considered to analyze whether the high-level workflow specification is compliant with the SLA, since a non-compliant pattern increases end-to-end latency. Here, a decomposed high-level workflow specification is executed on each workflow pattern to obtain a first set of QoS parameters, wherein the first set of QoS parameters is analyzed to identify an optimal workflow pattern compliant with the SLA. Further, if the obtained first set of QoS parameters is non-compliant with the SLA, a second set of QoS parameters is obtained for each low-level specification in accordance with the workflow pattern in order to identify the optimal workflow pattern.


Patent Information

Application #
Filing Date
01 December 2017
Publication Number
28/2019
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application
Patent Number
Legal Status
Grant Date
2024-04-29
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India

Inventors

1. KATTEPUR, Ajay
Tata Consultancy Services Limited, Gopalan Global Axis, H Block, Hoody Village Whitefield, Bangalore - 560 072, Karnataka, India
2. MUKHERJEE, Arijit
Tata Consultancy Services Limited, Ecospace Business Park, Block-1B, New Town, Rajarhat, N. 24 Parganas, Kolkata - 700 156, West Bengal, India
3. PURUSHOTHAMAN, Balamuralidhar
Tata Consultancy Services Limited, Gopalan Global Axis, H Block, Hoody Village Whitefield, Bangalore - 560 072, Karnataka, India
4. RATH, Hemant Kumar
Tata Consultancy Services Limited, Kalinga Park, IT/ITES Special Economic Zone, Plot - 35, Chandaka Industrial Estate, Patia, Chandrasekharpur, Bhubaneswar - 751 024, Odisha, India

Specification

DESC:FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:

SYSTEM AND METHOD FOR IDENTIFYING OPTIMAL WORKFLOW PATTERN DURING RUNTIME OF A TASK

Applicant

Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian patent application no. 201721043231, filed on 01st December, 2017, the complete disclosure of which, in its entirety, is herein incorporated by reference.
TECHNICAL FIELD
The disclosure herein generally relates to workflow management, and, more particularly, to a system and method for identifying an optimal workflow pattern during runtime of a task in a work environment.
BACKGROUND
Workflow patterns are widely utilized in various applications of Industry 4.0 to abstract high-level requirements or business needs in the supply-chain, manufacturing and business-process communities. A robot-assisted warehouse is an example of a work environment that needs to be compliant with Industry 4.0. Robot-assisted warehouses involve robotic processes, hardware and human agents that operate in independent modes to complete complex tasks. Performing complex tasks efficiently requires proper scheduling, optimization and planning so as to handle concurrent actions and manage coordination during runtime of the task. Specifically, in complex industrial workflows that involve invoking multiple modular components, end-to-end Quality of Service (QoS) and latency are affected when an executable workflow pattern for the task is non-compliant with the service level agreement (SLA). This occurs due to dynamic configurations of the task during runtime that cannot be envisioned prior to task initiation.
Most conventional methods provide task automation in work environments for handling forklift tasks on robotics platforms. The agents in such systems execute tasks based on a workflow pattern determined under static conditions. Such a statically determined workflow pattern degrades the QoS behaviour and latency of the performed task. Moreover, these conventional methods are limited in identifying a suitable workflow pattern for autonomous robots that perform complex tasks in dynamic work environments. In such environments, where interactions between robotic processes are crucial, identifying the optimal workflow pattern during runtime based on QoS parameters improves QoS and latency in accordance with the SLA.
An existing system that provides virtual warehouse management for logistics maintains visual warehouse databases of stock locations for determining the location of the task to be performed. However, its workflow patterns are not identified during runtime. The statically determined workflow pattern does not consider runtime configurations of dynamic complex tasks and thereby degrades the QoS parameters. The system is thus limited in identifying, during runtime, an optimal workflow pattern for complex tasks in a dynamically deployed environment based on QoS parameters, whereas such an identified optimal workflow pattern would improve QoS and latency in accordance with the SLA.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for identifying an optimal workflow pattern during runtime for a task is provided. The method includes receiving a high-level workflow specification as input from one or more users. The received high-level workflow specification is decomposed into a plurality of low-level specifications. Further, each low-level workflow specification among the plurality of low-level workflow specifications is executed on each workflow pattern in accordance with a plurality of workflow patterns, wherein the plurality of workflow patterns are generic predefined workflow patterns. Further, for each low-level workflow specification, at least one of a first set of Quality of Service (QoS) parameters is obtained using a parameter extraction technique and a second set of QoS parameters is obtained using a runtime optimal binding technique, by executing each low-level workflow specification in accordance with each workflow pattern among the plurality of workflow patterns, wherein the second set of QoS parameters is obtained if the first set of QoS parameters is non-compliant with a Service Level Agreement (SLA). Furthermore, an optimal workflow pattern is identified for each low-level workflow specification based on either the first set of QoS parameters or the second set of QoS parameters.
In another aspect, a workflow optimization system for identifying an optimal workflow pattern is provided. The system is configured to receive a high-level workflow specification as input from one or more users. The received high-level workflow specification is decomposed into a plurality of low-level specifications. Further, each low-level workflow specification among the plurality of low-level workflow specifications is executed on each workflow pattern in accordance with a plurality of workflow patterns, wherein the plurality of workflow patterns are generic predefined workflow patterns. Further, for each low-level workflow specification, at least one of a first set of Quality of Service (QoS) parameters is obtained using a parameter extraction technique and a second set of QoS parameters is obtained using a runtime optimal binding technique, by executing each low-level workflow specification in accordance with each workflow pattern among the plurality of workflow patterns, wherein the second set of QoS parameters is obtained if the first set of QoS parameters is non-compliant with a Service Level Agreement (SLA). Furthermore, an optimal workflow pattern is identified for each low-level workflow specification based on either the first set of QoS parameters or the second set of QoS parameters.
In yet another aspect, one or more non-transitory machine-readable information storage mediums are provided, comprising one or more instructions which, when executed by one or more hardware processors, cause actions including receiving a high-level workflow specification as input from one or more users. The received high-level workflow specification is decomposed into a plurality of low-level specifications. Further, each low-level workflow specification among the plurality of low-level workflow specifications is executed on each workflow pattern in accordance with a plurality of workflow patterns, wherein the plurality of workflow patterns are generic predefined workflow patterns. Further, for each low-level workflow specification, at least one of a first set of Quality of Service (QoS) parameters is obtained using a parameter extraction technique and a second set of QoS parameters is obtained using a runtime optimal binding technique, by executing each low-level workflow specification in accordance with each workflow pattern among the plurality of workflow patterns, wherein the second set of QoS parameters is obtained if the first set of QoS parameters is non-compliant with a Service Level Agreement (SLA). Furthermore, an optimal workflow pattern is identified for each low-level workflow specification based on either the first set of QoS parameters or the second set of QoS parameters.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an exemplary work environment implementing a workflow optimization system to identify optimal workflow pattern for a high-level workflow specification corresponding to a task, according to some embodiments of the present disclosure.
FIG. 2 is a functional block diagram of the workflow optimization system of FIG.1, according to some embodiments of the present disclosure.
FIG.3A illustrates a high-level design architecture of the workflow optimization system to identify optimal workflow pattern for the high-level workflow specification during runtime corresponding to the task, according to some embodiment of the present disclosure.
FIG.3B is an example illustrating design and runtime configurations for identifying optimal workflow pattern for the high-level workflow specification during runtime corresponding to the task in a robotics warehouse environment, in accordance with an example embodiment of the present disclosure.
FIG.4 illustrates an example flow diagram depicting a method for identifying optimal workflow pattern for the high-level workflow specification during runtime, according to some embodiment of the present disclosure.
FIG.5A illustrates an example workflow pattern identified using the workflow optimization system to be executed in the robotics warehouse environment, according to some embodiment of the present disclosure.
FIG.5B illustrates an example robotic warehouse environment for robotic workflow execution of the high-level workflow specification for task-division and synchronization as depicted in FIG.3A, according to some embodiment of the present disclosure.
FIG.6 illustrates graphical representation of the workflow optimization system based on QoS cumulative distribution values and automation procurement latency for the high-level workflow, according to some embodiment of the present disclosure.
FIG.7 illustrates a graphical representation of varying invocation workflow patterns and cost function values executed in the robotic warehouse environment for the high-level workflow specification, in accordance with the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
Embodiments herein provide a system 102 and method for identifying an optimal workflow pattern during runtime of a task. The system, alternatively referred to as the workflow optimization system 102, identifies an optimal workflow pattern for a high-level workflow specification corresponding to a task received from one or more users. The optimal workflow pattern is utilized in complex industrial workflows, where multiple modular components and robotic processes are invoked concurrently, thereby improving end-to-end QoS behaviour and latency. The workflow optimization system 102 receives the high-level workflow specification corresponding to the task from one or more users. The received high-level workflow specification is decomposed into a plurality of low-level workflow specifications. Each low-level workflow specification among the plurality of low-level workflow specifications is executed on each workflow pattern in accordance with a plurality of workflow patterns. The plurality of workflow patterns are generic predefined workflow patterns and may include a single-site workflow pattern, a sequential workflow pattern, a parallel workflow pattern, a pruning or fastest workflow pattern, a follower workflow pattern and the like. Further, the system 102 obtains a first set of Quality of Service (QoS) parameters for each low-level workflow specification among the plurality of low-level workflow specifications in accordance with each workflow pattern using a parameter extraction technique. In this technique, random variables are drawn from latency distributions for each low-level workflow specification corresponding to each workflow pattern among the plurality of workflow patterns. The latency distributions are predefined in the system 102 based on exponential completion times corresponding to each modular component involved in the task.
Further, the system 102 obtains the first set of QoS parameters for each low-level workflow specification among the plurality of low-level workflow specifications in accordance with each workflow pattern among the plurality of workflow patterns based on the obtained random variables and QoS incremental rules. The QoS incremental rules provide, for each workflow pattern, values corresponding to latency QoS and cost QoS. The first set of QoS parameters is then utilized to analyse whether the workflow pattern corresponding to each low-level specification is compliant with the SLA; if so, an optimal workflow pattern is identified corresponding to the plurality of low-level workflow specifications. If the obtained first set of QoS parameters is non-compliant with the SLA, then a second set of QoS parameters is obtained for each low-level specification during runtime using a runtime optimal binding technique. In this technique, a parallel workflow pattern from the plurality of workflow patterns is executed on each low-level workflow specification to perform the task in parallel at multiple sites. Random variables are again drawn from the latency distributions for each low-level workflow specification corresponding to each workflow pattern among the plurality of workflow patterns, the latency distributions being predefined in the system 102 based on exponential completion times corresponding to each modular component involved in the task. The system 102 then obtains the second set of QoS parameters for each low-level workflow specification in accordance with each workflow pattern based on the obtained random variables and the QoS incremental rules.
Further, the second set of QoS parameters is utilized to analyse whether the workflow pattern corresponding to each low-level specification is compliant with the SLA; if so, an optimal workflow pattern is selected corresponding to the plurality of low-level workflow specifications. A detailed description of the above-described system for identifying an optimal workflow pattern during runtime is provided with reference to the illustrations of FIGS. 1-7.
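The selection logic described above can be sketched in Python (an illustrative reading only; the function names, the latency-only SLA, and the callbacks `extract_first_set` and `runtime_optimal_binding` are hypothetical placeholders for the parameter extraction and runtime optimal binding techniques of the disclosure):

```python
def identify_optimal_pattern(low_level_specs, patterns, sla_latency,
                             extract_first_set, runtime_optimal_binding):
    """Pick, for each low-level specification, the pattern with the lowest
    latency among SLA-compliant candidates; if the first set of QoS
    parameters is non-compliant, fall back to runtime optimal binding."""
    chosen = {}
    for spec in low_level_specs:
        # First set of QoS parameters via the parameter extraction technique
        qos = {p: extract_first_set(spec, p) for p in patterns}
        compliant = {p: v for p, v in qos.items() if v <= sla_latency}
        if not compliant:
            # Non-compliant with the SLA: obtain the second set during runtime
            compliant = {p: runtime_optimal_binding(spec, p) for p in patterns}
        chosen[spec] = min(compliant, key=compliant.get)
    return chosen
```

As a usage sketch, a specification whose sequential latency violates the SLA but whose parallel latency satisfies it would be bound to the parallel pattern.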
FIG.1 illustrates an exemplary work environment implementing a workflow optimization system to identify an optimal workflow pattern for a high-level workflow specification corresponding to a task, according to some embodiments of the present disclosure. The work environment 100 comprises the workflow optimization system 102, implemented in a computing device 104 that receives the high-level workflow specification corresponding to the task from the user. The workflow optimization system 102 may be externally coupled (as shown in FIG.1) to the computing device 104 or may be internal (not shown) to the computing device 104. Although the present disclosure is explained considering that the workflow optimization system 102 is implemented on a server, it may be understood that the workflow optimization system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment and the like. In one implementation, the workflow optimization system 102 may be implemented in a cloud-based environment. It will be understood that the workflow optimization system 102 may be accessed by multiple users through one or more user devices 106-1, 106-2... 106-N, collectively referred to as user devices 106 hereinafter, or applications residing on the user devices 106. Examples of the user devices 106 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, a Smartphone, a Tablet Computer, a workstation and the like. The user devices 106 are communicatively coupled to the system 102 through a network 108. The high-level workflow specification corresponding to the task may be received from the users through the user devices 106 and stored in a repository 112. The repository 112 may be external to the workflow optimization system 102 or internal to the workflow optimization system 102 (as shown in FIG.2).
In an embodiment, the network 108, which transmits the high-level workflow specification to the computing device 104, may be a wireless or a wired network, or a combination thereof. In an example, the network 108 can be implemented as a computer network, as one of the different types of networks, such as a virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 108 may be either a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with one another. Further, the network 108 may include a variety of network devices, including routers, bridges, servers, computing devices, and storage devices. The network devices within the network 108 may interact with the system 102 through communication links.
FIG. 2 is a functional block diagram of the workflow optimization system of FIG. 1, according to some embodiments of the present disclosure. In an example embodiment, the system 102 may be embodied in, or be in direct communication with, the system 100 (FIG. 1). The system 102 includes or is otherwise in communication with one or more hardware processors 202, the memory 208, and the I/O interface 204, which may be coupled by a system bus 206 or a like mechanism. The memory 208 further may include modules 210. The I/O interface 204 of the workflow optimization system 102 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like, as well as interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a camera device, and a printer. Further, the interfaces 204 may enable the system 102 to communicate with other devices, such as web servers and external databases. The interfaces 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the interfaces 204 may include one or more ports for connecting a number of computing systems with one another or to another server computer, and one or more ports for connecting a number of devices to one another or to another server.
The hardware processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the hardware processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 208. The memory 208 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 208 includes a plurality of modules 210 and a repository 212 for storing data processed, received, and generated by one or more of the modules 210. The modules 210 may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types. The repository 212, amongst other things, includes a system database and other data. The other data may include data generated as a result of the execution of one or more modules in the modules 210. The repository 212 is further configured to maintain the high-level workflow specifications received from the user.
The modules 210 can alternatively be implemented as an Integrated Circuit (IC) external to the memory 208 (not shown), using a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). The names of the functional blocks within the modules 210 referred to herein are used for explanation and are not a limitation. Further, the memory 208 can also include the repository 112 (internal to the workflow optimization system 102 as shown in FIG. 2). The modules 210 may include computer-readable instructions that supplement applications or functions performed by the workflow optimization system 102. The repository 212 may store data that is processed, received, or generated as a result of the execution of one or more modules in the module(s) 210. The modules 210 include a decomposition module 214, a QoS parameter generation module 216 and a workflow pattern identification module 218. The decomposition module 214 of the workflow optimization system 102 can be configured to decompose the received high-level workflow specification into the plurality of low-level workflow specifications, wherein each low-level workflow specification can be executed on each workflow pattern among the plurality of workflow patterns. The QoS parameter generation module 216 of the workflow optimization system 102 can be configured to obtain the first set of QoS parameters and the second set of QoS parameters for each low-level workflow specification in accordance with the workflow pattern. The workflow pattern identification module 218 can be configured to identify an optimal workflow pattern for the received high-level workflow specification based on either the first set of QoS parameters or the second set of QoS parameters. The modules 210 are further explained in conjunction with FIG. 3 and FIG. 4.
FIG.3A illustrates a high-level design architecture of the workflow optimization system to identify the optimal workflow pattern for the high-level workflow specification during runtime corresponding to the task, according to some embodiments of the present disclosure. Consider an example application of the workflow optimization system 102 in a robotic warehouse environment, in conjunction with FIG.5A, where the robotic warehouse environment is configured with an inventory management module, an order processing module and a task assignment module. The present disclosure may be extended to various warehouse applications where complex tasks are executable on one or more modular components. The robotic warehouse environment 100 includes hundreds of humans and robots on the shop floor for executing complex workflows, performing activities such as breaking pallets into manageable units, and labeling and packaging of goods. The workflow optimization system 102 can be configured to receive the high-level workflow specification corresponding to the task from the user. The high-level workflow specification is a requirement for multi-robot warehouse coordination and execution in the robotic warehouse environment 100. The workflow optimization system 102 can be configured to decompose the received high-level workflow specification into the plurality of low-level workflow specifications, which are further sub-divided and assigned sequentially to various autonomous robotic and/or software resources. During runtime, the robotic components are executed using multiple workflow patterns for the high-level workflow specification, where robotic or machine components and tasks are allocated to agents that are capable of sensing, querying a knowledge base and performing actions related to a specific goal.
FIG.3B is an example illustrating design and runtime configurations for identifying the optimal workflow pattern for the high-level workflow specification during runtime corresponding to the task in a robotic warehouse environment, in accordance with an example embodiment of the present disclosure. Consider an example application of the workflow optimization system 102 in a robotic warehouse environment, in conjunction with FIGS. 5A and 5B, where late-runtime binding of services in the Service Oriented Architecture (SOA) is achieved by interconnecting autonomous robotic components that are typically observed in Industry 4.0 industrial deployments. In the design-time specification phase, the workflow optimization system 102 receives the high-level workflow specification and business requirement as a module input that invokes a robotic automation and Orc specification, which integrates into the workflow pattern. Furthermore, the design-time specification may be deployed during optimal runtime binding. In the runtime deployment phase, a plurality of software service components, a plurality of robotic process automations and a plurality of SLA compliance checks may invoke a plurality of autonomous robotic agents that may be reconfigured to scale up or down. The high-level design architecture mainly focuses on identifying the optimal workflow pattern that optimizes the output QoS values, wherein order delivery robotic service sites are invoked with the procurement service obtained utilizing a workflow pattern among the plurality of workflow patterns. The system optimizes over the invocations or combinators, such as sequential or parallel, in order to improve the runtime execution QoS values for each low-level workflow specification in accordance with the workflow pattern. The number of sites that may be bound within the combinators is constrained both by the workflow patterns and by the underlying resources.
Furthermore, a service or robotic agent may be a reusable set of deployable functions with input or output data, considering the constraints on the data invocation patterns as well as hardware resources such as memory, CPU, disk and the like.
FIG.4 illustrates an example flow diagram depicting a method for identifying the optimal workflow pattern for the high-level workflow specification during runtime, according to some embodiments of the present disclosure. The method 400 is explained with the example scenario of FIG. 5A and FIG. 5B, in conjunction with FIG.3B. In an embodiment, the system 102 comprises one or more data storage devices or the memory 208 operatively coupled to the one or more hardware processors 202 and is configured to store instructions for execution of the steps of the method 400 by the one or more processors 202 in conjunction with various modules of the modules 210, such as the decomposition module 214, the QoS parameter generation module 216 and the workflow pattern identification module 218. The steps of the method 400 of the present disclosure will now be explained with reference to the components of the system 102 as depicted in FIG. 1 and FIG. 2 and the steps of the flow diagram as depicted in FIG. 4. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
At step 402 of the method 400, the one or more hardware processors 202 in conjunction with the decomposition module 214 are configured to receive, the high-level workflow specifications. At step 404 of the method 400, the one or more hardware processors 202 in conjunction with the decomposition module 214 are configured to decompose the received high-level workflow specification into the plurality of low-level workflow specifications.
At step 406 of the method 400, the one or more hardware processors 202 in conjunction with the QoS parameter generation module 216 are configured to execute the plurality of workflow patterns on each low-level specification during runtime, wherein the plurality of workflow patterns are generic predefined workflow patterns. The workflow patterns include the parallel combinator, the sequential combinator, the pruning or fastest combinator, and the otherwise combinator or followers. For the parallel combinator (|), given two sites s_1 and s_2, the expression s_1 | s_2 invokes both sites in parallel. The sites execute independently and the output published can be any of the outputs of s_1 or s_2. For the sequential combinator (>>), given two sites s_1 and s_2, in the expression s_1 >> s_2 the site s_1 is evaluated initially. Every value published by s_1 initiates a separate execution of site s_2, with publications bound to that execution of s_2. For the pruning or fastest combinator (<<), given two sites s_1 and s_2, in the expression s_1 << s_2 both sites s_1 and s_2 are executed in parallel. If s_2 publishes a value, the execution of s_2 is terminated and the suspended parts of s_1 proceed; this provides the mechanism to block or terminate computations. For the otherwise combinator (;), given two sites s_1 and s_2, the expression s_1 ; s_2 executes site s_1. If s_1 publishes no value and halts, then s_2 is executed instead; halting occurs when all site calls will never publish any more values or will not call any more sites. Table 1 depicts the Orc sites and combinators for all workflow patterns.
Orc                      Workflow Pattern
s_1 >> s_2               Sequence, Milestone
s_1 | s_2                Parallel Split, Multiple Instances
?v << (s_1 | s_2)        Exclusive Choice, Synchronising Merge
s_1 ; s_2                Implicit Termination, Cancel Activity/Case
(s_1, s_2)               Synchronization
s_1 | s_2 >> v           Simple Merge, Multi-Choice, Multi-Merge
s_1 >> s_2 >> s_1        Arbitrary Cycles
Ift                      Deferred Choice, Discriminator
Lock                     Interleaved Parallel Routing
Table 1 – Workflow patterns represented in Orc
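The semantics of the four combinators above can be emulated in a minimal, non-concurrent Python sketch. This is an illustrative model only, not Orc itself: sites are assumed to be callables that return a list of published values, and true concurrency is elided.

```python
# Sites are modeled as callables returning a list of published values.

def parallel(s1, s2):
    # s1 | s2 : both sites are invoked; publications of either may be observed.
    return s1() + s2()

def sequential(s1, s2):
    # s1 >> s2 : every value published by s1 initiates a fresh execution of s2.
    out = []
    for v in s1():
        out.extend(s2(v))
    return out

def pruning(s1, s2):
    # s1 << s2 : s2 runs until its first publication, then is terminated;
    # the suspended parts of s1 proceed with that value bound.
    vals = s2()
    first = vals[0] if vals else None
    return s1(first)

def otherwise(s1, s2):
    # s1 ; s2 : if s1 halts without publishing a value, s2 is executed instead.
    vals = s1()
    return vals if vals else s2()
```

For example, `otherwise(lambda: [], lambda: [9])` falls through to the second site because the first halts silently, mirroring the fault-handling role of `;` described later in this disclosure.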
At step 408, the one or more hardware processors obtain the first set of QoS parameters for each low-level workflow specification in accordance with the workflow pattern using a parameter extraction technique. The parameter extraction technique obtains the first set of Quality of Service (QoS) parameters for each low-level workflow specification corresponding to the workflow pattern. The technique performs Monte Carlo execution runs to obtain the end-to-end QoS of a workflow pattern given a runtime binding, where random variables are drawn from the latency distributions of the encapsulated sites. Further, based on the latency incremental rules, the latency QoS values and cost QoS values are obtained for each low-level specification corresponding to the workflow pattern. The parameter extraction technique comprises the following steps:
Step 1: Select a workflow configuration and a runtime binding of software/robotic agents
for i = 1 to 100000
    Step 2: Draw random variables from the distribution of each encapsulated function (means µ_1, µ_2, …, µ_n)
    Step 3: Derive the end-to-end QoS based on the incremental rules in Table 2
end
Step 4: Plot the end-to-end latency QoS CDF
Step 5: Plot the end-to-end cost QoS
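The steps above can be sketched in Python. This is a minimal illustration under stated assumptions: site latencies are drawn from exponential distributions (as in the warehouse experiments described later), the means are illustrative, and the function returns the 95th-percentile latency rather than plotting the full CDF.

```python
import random

def monte_carlo_qos(pattern, means, runs=100000, seed=0):
    """Estimate the end-to-end latency of a workflow pattern by Monte Carlo.

    pattern: 'sequential' adds site latencies (>>), 'parallel' waits for the
    slowest site (| / fork-join), 'pruning' keeps the fastest response (<<).
    means: mean latency of each encapsulated site's exponential distribution.
    Returns the 95th-percentile end-to-end latency, the value that would be
    compared against an SLA.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(runs):
        # Step 2: draw a random latency for each encapsulated function.
        draws = [rng.expovariate(1.0 / m) for m in means]
        # Step 3: derive end-to-end QoS per the incremental rules.
        if pattern == 'sequential':
            samples.append(sum(draws))
        elif pattern == 'parallel':
            samples.append(max(draws))
        elif pattern == 'pruning':
            samples.append(min(draws))
    samples.sort()
    return samples[int(0.95 * runs)]
```

For two sites with a 5-minute mean, the pruning (fastest) pattern yields a lower 95th-percentile latency than parallel fork-join, which in turn beats sequential invocation, consistent with the trends reported for FIG. 6.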

As represented in Table 2, the mapping of QoS increments is in line with the various invocation patterns. The QoS values are random variables from a distribution for obtaining the end-to-end QoS of a complex high-level workflow specification with multiple data flow constraints, concurrent invocations and failures. The incremental rules provide increments to the QoS counters. For latency QoS, when multiple sites are invoked and the workflow waits for all robotic responses to return, the max pattern is used. For the single-site sequential invocation workflow pattern, the QoS values are added (+). When choosing the pruning or fastest responding site, the best (fastest) QoS value is selected. This is typically traded off against cost QoS values (invoking multiple sites leads to excess use of resources). The cost QoS values are also tracked using similar rules, with multiple invocations typically computed using the addition (+) operator.
Orc Workflow Pattern            Latency QoS addition    Cost QoS addition
Single site                     QoS value               QoS value
Sequential (>>)                 +                       +
Parallel (|) / Fork-Join (,)    Max                     +
Pruning (<<, fastest)           Best                    +
Otherwise (;, follower)         Max                     +
Table 2 - QoS Incremental output for various patterns
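The incremental rules of Table 2 can be expressed as a small lookup from pattern to aggregation operator. This is an illustrative sketch; the dictionary keys and the `increment_qos` helper are assumed names, not part of the Orc implementation.

```python
# Latency increment rules mirroring Table 2.
LATENCY_RULE = {
    'single':     lambda ls: ls[0],  # single site: the site's own QoS value
    'sequential': sum,               # >> : latencies add
    'parallel':   max,               # | / fork-join (,) : wait for the slowest
    'pruning':    min,               # << : fastest response wins ("Best")
    'otherwise':  max,               # ;  : bounded by the slower alternative
}

def increment_qos(pattern, latencies, costs):
    """Aggregate per-site latency and cost values for one invocation pattern.

    Cost is added across all invoked sites regardless of the pattern,
    capturing the latency/cost trade-off noted above.
    """
    return LATENCY_RULE[pattern](latencies), sum(costs)
```

For instance, two sites with latencies 2 and 3 yield an end-to-end latency of 5 when sequenced but only 2 under pruning, while the cost is 2 in both cases.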
At step 410, the one or more hardware processors analyse whether the first set of QoS parameters for each low-level workflow specification is compliant with the SLA. Here, the first set of QoS parameters obtained for each low-level workflow specification under each of the plurality of workflow patterns is analysed to identify the low-level specification that is compliant with the SLA.
At step 412, the one or more hardware processors obtain the second set of QoS parameters for each low-level workflow specification in accordance with the workflow pattern using a runtime optimal binding technique. The runtime optimal binding technique is utilized to obtain the second set of QoS parameters for each low-level workflow specification in accordance with the workflow pattern. Given a runtime binding, random variables are drawn from the latency distributions of the encapsulated sites and aggregated based on the incremental rules provided in Table 2, providing the latency QoS CDF (cumulative distribution function) values for the end-to-end workflow pattern, which may be further compared with the obtained QoS values. The technique may be represented as described below:
Measure the end-to-end latency/cost CDF from Monte Carlo runs
Step 1: while the QoS CDF / values do not meet the SLA do:
    // This parallelizes all services/jobs in a naive way initially
    Step 2: Replace sequential invocation >> with parallel | when there are no input data constraints
    // This can be used to scale up/out services when resource constraints are met on a single service or robot agent
    Step 3: Iteratively double the number of parallel invocations of the encapsulated site with the | and (,) patterns
    // This handles alternative services when faults occur within the distributed environment
    Step 4: If a failure happens, replace the pattern with the ; pattern
    // When multiple sites perform the same function, late runtime binding may be performed
    Step 5: Select the best performing sites with the pruning << pattern
    Step 6: Re-run the Monte Carlo run with the rules for the selected patterns
end
if no viable configuration is available:
    return no viable configuration satisfying the SLA
else:
    return the optimal workflow configuration with its QoS outputs
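The scale-up portion of this loop can be sketched in Python. This is a simplified illustration under stated assumptions: only the doubling of parallel invocations (with pruning selecting the fastest) is modeled, site latency is exponential with an assumed mean, and the function name and parameters are hypothetical.

```python
import random

def optimal_binding(mean_latency, sla_p95, max_parallel=16, runs=20000, seed=0):
    """Iteratively double parallel invocations of an encapsulated site
    (| pattern with << selection of the fastest response) until the
    95th-percentile latency meets the SLA, or report no viable configuration."""
    rng = random.Random(seed)
    n = 1
    while n <= max_parallel:
        # Monte Carlo run: fastest of n parallel exponential site invocations.
        samples = sorted(
            min(rng.expovariate(1.0 / mean_latency) for _ in range(n))
            for _ in range(runs)
        )
        p95 = samples[int(0.95 * runs)]
        if p95 <= sla_p95:
            return n, p95   # viable configuration with its QoS output
        n *= 2              # iteratively double the parallel invocations
    return None             # no viable configuration satisfying the SLA
```

With a 5-minute mean site latency and a 6-minute SLA on the 95th percentile, a single invocation fails the SLA and the loop scales out to several parallel invocations before a compliant configuration is found, at the cost of the extra invocations.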
The end-to-end latency CDF values and cost values are obtained based on the incremental rules, as represented in Table 3 below.

Table 3 – Varied workflow invocation patterns
Sequential invocations may be replaced with parallel ones for workflows that have no input data constraints. Further, a large task is divided into subtasks assigned in parallel to multiple sites, and the available sites/resources are iteratively doubled to scale up the services. Late runtime binding allows one to invoke multiple sites with similar functionalities and then choose the best returning service; however, this trades off against higher cost values. Failures are handled by broadcasting halts over Orc, and this rule may automatically invoke alternative services to take over the functionality. Each configuration produced may be used to re-run the Monte Carlo algorithm to produce the QoS output. If any of the represented configurations satisfies the service level agreements, it is chosen as the optimal configuration. It is also simple to extend this to an exhaustive search process wherein all possible configurations are evaluated and the maximum QoS distribution is chosen.
In an embodiment, the latency incremental rules and timeouts are included for latency QoS sites, as provided in Table 4, linked with the workflow patterns among the plurality of workflow patterns integrated into complex compositions.

Table 4– Latency specifications in Orc
The latency computations in fork-join operations may be done using Latency.join(t1, t2), and the fastest returning computation may be selected using Latency.best(t1, t2). An operator called Latency.increment(t1) is provided to track latency increments during the computation of complex expressions.

At step 414, the one or more hardware processors determine whether the first set of QoS parameters for each low-level workflow specification is compliant with the SLA. The workflow optimization system 102 can be configured to identify an optimal workflow pattern from the plurality of workflow patterns for each low-level workflow specification based on one of: the first set of QoS parameters, when the first set of QoS parameters is compliant with the SLA; and the second set of QoS parameters, when the first set of QoS parameters is non-compliant and the second set of QoS parameters is compliant with the SLA, wherein the identified optimal workflow pattern has the maximum value of QoS parameters.

At step 416, the one or more hardware processors analyse whether the second set of QoS parameters for each low-level workflow specification is compliant with the SLA. At step 420, the one or more hardware processors identify an optimal workflow pattern from the plurality of workflow patterns for each low-level workflow specification based on one of the sets of QoS parameters.
FIG. 5A illustrates an example workflow pattern identified using the workflow optimization system to be executed in the robotic warehouse environment, according to some embodiments of the present disclosure, and FIG. 5B illustrates an example robotic warehouse environment for robotic workflow execution of the high-level workflow specification for task division and synchronization as depicted in FIG. 3A, according to some embodiments of the present disclosure. Considering the warehouse workflow execution model in conjunction with FIG. 3A, the complex tasks may be executed sequentially or in parallel using the fastest invocation workflow pattern. The workflow composition receives a high-level workflow specification depicting the modular components implementing warehouse operations, which include procurement, delivery and storage, as represented in Tables 4A and 4B below. In the procurement phase, the re-usable site definition of the procurement workflow consists of functions to receivepallet, breakpallet, SKUpick and arrange SKU. This can be instantiated multiple times to receive products based on a ProdID, break down pallets from pallet size to SKUs and arrange them for storage.

Table 4A – High-level workflow specification utilizing various workflow patterns for procurement and delivery site

Table 4B – High-level workflow specification utilizing various workflow patterns for storage site
The dictionary() site creates mutable maps from field names to values that may be extended. Latency incorporated into each of the sites may be specified using Rwait values. Assignment and retrieval from mutable references are done using the := and ? patterns. In the delivery phase, the site definition of the delivery workflow pattern consists of functions to ReceiveOrder, Pick, Check, Pack and Ship. Each step provides the input to the succeeding step, for instance, with parameters such as checkstatus and packing. The definitions interact with the storage workflow using the storage.RetrieveStock site call to retrieve products. In the storage phase, the storage site serves as the intermediary to the delivery and procurement workflows. Storage is divided into the WarehouseStorage area and the ForwardStorage area. The current stock value is tracked in the Stockvalue site based on procurement and delivery; the RetrieveStock site procures smaller orders from the ForwardStorage area and larger orders (with additional delay) from the WarehouseStorage.
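The step-feeds-next chaining of the delivery phase can be sketched in Python. The `sequential_pipeline` helper and the toy step functions are illustrative assumptions standing in for the Orc site definitions, not the implementation itself.

```python
def sequential_pipeline(steps, order):
    """Chain delivery steps with the >> pattern: each step's output
    becomes the input of the next (e.g. checkstatus feeding packing)."""
    result = order
    for step in steps:
        result = step(result)
    return result

# Toy steps standing in for ReceiveOrder >> Pick >> Check >> Pack >> Ship.
steps = [
    lambda o: {**o, 'received': True},     # ReceiveOrder
    lambda o: {**o, 'picked': True},       # Pick
    lambda o: {**o, 'checkstatus': 'ok'},  # Check
    lambda o: {**o, 'packed': True},       # Pack
    lambda o: {**o, 'shipped': True},      # Ship
]
```

Calling `sequential_pipeline(steps, {'ProdID': 1})` threads the order record through all five stages, accumulating each stage's outputs while preserving the original ProdID.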

In an embodiment, the workflow optimization system 102 may decompose the received high-level specification of any complex workflow, such as breaking pallets into manageable units and labelling and packing of goods, into low-level specifications. Here, the workflow optimization system 102 may include a procurement phase, a storage phase and a delivery phase. During the procurement phase, one or more robots may receive a notification for arrived goods to be processed using the workflow optimization system 102. Here, the one or more robots initially check for order integrity and damages. Products typically arrive in large pallets that may be broken down into smaller units to prepare for storage. The broken-down Stock Keeping Units (SKUs) may then be tagged using electronic devices such as radio frequency identification (RFID) tags before being readied for storage. During the storage phase, one or more automatic robotic pickers are utilized for picking and storing SKUs. Further, these one or more robots may be used as forklift robots, picker robots and delivery robots for executing a workflow task. The one or more delivery robots may also be used for handling a large storage area consisting of multiple racks. Here, a subset of stored products is moved to a forward area to ensure quicker delivery of products. During the delivery phase, the one or more delivery robots may receive one or more ordered products procured from the warehouse. Typically, this process is handled by a centralized orchestrator, which determines the number of pickers and the time limits needed to procure products from the warehouse. Once all the required products are procured, they are collectively collated and checked. Finally, packing and shipping of the product is done for order fulfilment.
FIG. 6 illustrates a graphical representation of the workflow optimization system 102 based on QoS cumulative distribution values and automation procurement latency for the high-level workflow, according to some embodiments of the present disclosure. The received high-level workflow specification for various workflow patterns is captured for automation procurement latency and cumulative distributions. Consider delivery agent robots with exponential completion times having a mean of 5 minutes. The cumulative densities of the distributions are collected after 10,000 Monte Carlo runs (using clock.time() in Orc), wherein the parallel and fastest workflow specifications outrun the single workflow execution. The follower execution initially overlaps the single execution times; once the threshold of 5 minutes is exceeded, it improves on the sequential execution times. The percentile values have also been displayed, with the 95th percentiles provided: the worst is single-robot task allocation having 30 minutes latency, and the best is fastest at 14.2 minutes (52.6% improvement), parallel at 18.8 minutes (37.7% improvement) and sequential/follower at 24 minutes (20% improvement).
FIG. 7 illustrates a graphical representation of varying invocation workflow patterns and cost function values executed in the robotic warehouse environment for the high-level workflow specification, in accordance with the present disclosure. The simulation graph provides the cumulative distribution values for latency QoS and cost QoS when different workflow invocation patterns are used. While the fastest and follower workflow patterns have increasingly efficient latency distributions (comparing median or percentile values with stochastic dominance techniques), there is a trade-off in the output cost values when invoking such patterns. Here, two interleaved complex workflows are called in parallel: ProcurementWorkflow() | deliveryWorkflow(). Each of these has site calls to multiple procurement, storage and delivery activities specified as robotic picker and delivery agents. While each site may be implemented on distributed agents, the experimental results implement the sites locally with exponentially distributed timing delays. Robotic delay parameters are obtained from specifications with latency increment rules. The output of a composite workflow simulation shows that the various workflow activities proceed in the specified order. Using the clock.time() and Latency.increment(t) sites, end-to-end procurement and delivery times may be tracked. This may be used to specify the Order Delivery SLAs. Delays in procurement caused by the picking and delivery agents are also tracked within this framework. This framework allows the warehouse activity planner to study and analyze activities at design time. Especially in times of turbulent demand and supply rates, this may be combined with optimization frameworks to efficiently manage inventory supplies.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The embodiments of the present disclosure herein address the unresolved problem of identifying an optimal workflow pattern for complex tasks in the work environment. The embodiments thus provide a method for identifying an optimal workflow pattern for complex tasks to be executed in a work environment. Moreover, the embodiments herein provide a unified framework to represent agents, including software and robotic processes, that work autonomously but are coordinated by the workflow composition. This workflow optimization system enables scaling up fine-grained analysis of workflow specifications on multiple systems. The identified workflow pattern for the complex task improves end-to-end QoS and latency in work environments.

It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.
CLAIMS:
1. A processor implemented method to identify an optimal workflow pattern during runtime, wherein the method comprises:
receiving, a high-level workflow specification corresponding to a task from a user;
decomposing, the high-level workflow specification into a plurality of low-level workflow specifications;
executing, each low-level specification among the plurality of low-level workflow specifications in accordance with a plurality of workflow patterns, wherein the plurality of workflow patterns are generic predefined workflow patterns;
obtaining, for each low-level specification, at least one of a first set of Quality of Service (QoS) parameters using parameter extraction technique and a second set of QoS parameters using runtime optimal binding technique by executing each low-level workflow specification in accordance with each workflow pattern among the plurality of workflow patterns, wherein the second set of QoS parameters are obtained if the first set of QoS parameters are non-compliant with a Service Level Agreement (SLA); and
identifying, an optimal workflow pattern from the plurality of workflow patterns for each low-level workflow specification based on one of:
the first set of QoS parameters, when the first set of QoS parameters are compliant with the SLA; and
the second set of QoS parameters, when the first set of QoS parameters are non-compliant and the second set of QoS parameters are compliant with the SLA, wherein the identified optimal workflow pattern has the maximum value of QoS parameters;
2. The processor implemented method as claimed in claim 1, wherein the first set of Quality of Service (QoS) parameters is obtained using parameter extraction technique, wherein the parameter extraction technique comprises:
obtaining, random variables for each low-level workflow specification corresponding to each workflow pattern from latency distributions of each encapsulated function, wherein the latency distributions is obtained based on exponential completion time; and
obtaining, a first set of Quality of Service (QoS) parameters for each low-level workflow specification among the plurality of low-level workflow specifications in accordance with each workflow pattern among the plurality of workflow patterns based on the random variables and the Quality of Service (QoS) incremental rules;
3. The processor implemented method as claimed in claim 2, wherein the Quality of Service (QoS) incremental rules provides latency and cost values for corresponding workflow pattern;

4. The processor implemented method as claimed in claim 1, wherein the second set of Quality of Service (QoS) parameters is obtained during runtime using runtime optimal binding technique, wherein the runtime optimal binding technique comprises:
executing, a parallel workflow pattern from the plurality of workflow patterns on each low-level workflow specification, wherein the parallel workflow pattern enables each low-level workflow specification to be performed simultaneously in multiple sites;
obtaining, random variables for each low-level workflow specification corresponding to parallel workflow pattern from latency distributions of each encapsulated function; and
obtaining, a second set of Quality of Service (QoS) parameters of each low-level workflow specification corresponding to parallel workflow pattern based on the random variables and the Quality of Service (QoS) incremental rules;
5. A workflow optimization system (102), wherein the system comprises:
a processor (202);
an Input/output (I/O) interface (204); and
a memory (208) coupled to the processor (202), the memory (208) comprising:
a decomposition module (214) configured to:
receive, a high-level workflow specification corresponding to a task from a user;
decompose, the high-level workflow specification into a plurality of low-level workflow specifications;
execute, each low-level specification among the plurality of low-level workflow specifications in accordance with a plurality of workflow patterns, wherein the plurality of workflow patterns are generic predefined workflow patterns;
a QoS parameter generation module (216) configured to:
obtain, for each low-level specification, at least one of a first set of Quality of Service (QoS) parameters using parameter extraction technique and a second set of QoS parameters using runtime optimal binding technique by executing each low-level workflow specification in accordance with each workflow pattern among the plurality of workflow patterns, wherein the second set of QoS parameters are obtained if the first set of QoS parameters are non-compliant with a Service Level Agreement (SLA);
a workflow pattern identification module (218) configured to:
identify, an optimal workflow pattern from the plurality of workflow patterns for each low-level workflow specification based on one of:
the first set of QoS parameters, when the first set of QoS parameters are compliant with the SLA; and
the second set of QoS parameters, when the first set of QoS parameters are non-compliant and the second set of QoS parameters are compliant with the SLA, wherein the identified optimal workflow pattern has the maximum value of QoS parameters;
6. The workflow optimization system 102 as claimed in claim 5, wherein the QoS parameter generation module 216 is configured to obtain the first set of Quality of Service (QoS) parameters using parameter extraction technique, comprising:
obtaining, random variables for each low-level workflow specification corresponding to each workflow pattern from latency distributions of each encapsulated function, wherein the latency distributions is obtained based on exponential completion time; and
obtaining, a first set of Quality of Service (QoS) parameters for each low-level workflow specification among the plurality of low-level workflow specifications in accordance with each workflow pattern among the plurality of workflow patterns based on the random variables and the Quality of Service (QoS) incremental rules;

7. The workflow optimization system 102 as claimed in claim 5, wherein the Quality of Service (QoS) incremental rules provides latency and cost values for corresponding workflow pattern; and

8. The workflow optimization system 102 as claimed in claim 1, wherein the QoS parameter generation module 216 is configured to obtain the second set of Quality of Service (QoS) parameters during runtime using runtime optimal binding technique, wherein the runtime optimal binding technique comprises:
executing, a parallel workflow pattern from the plurality of workflow patterns on each low-level workflow specification, wherein the parallel workflow pattern enables each low-level workflow specification to be performed simultaneously in multiple sites;
obtaining, random variables for each low-level workflow specification corresponding to parallel workflow pattern from latency distributions of each encapsulated function; and
obtaining, a second set of Quality of Service (QoS) parameters of each low-level workflow specification corresponding to parallel workflow pattern based on the random variables and the Quality of Service (QoS) incremental rules.
