Specification
FIELD OF THE INVENTION
This invention relates to the field of Enterprise Communication Applications (ECAs) and more particularly to development and execution environments for ECAs.
BACKGROUND
An Enterprise Communication Application (ECA) comprises an enterprise application (for example pricing, customer relationship management, sales and order management, inventory management, etc.) integrated with one or more communications applications (for example, internet telephony, video conferencing, instant messaging, email, etc.). The integration of enterprise applications with real-time communications in ECAs may be used to solve problems related to human latency and a mobile workforce. Human latency is the time taken by people to respond to events. As such, human latency reduces an enterprise's ability to respond to customers and manage time-critical situations effectively. As an example, consider an Inventory Management System (IMS) which displays stock levels to users in a user interface. In such an IMS, critical stock situations such as shortages and surpluses become visible only when a user logs into the system. A simple extension of such an IMS would be to incorporate instant messaging so that the concerned users can be messaged as and when critical stock situations arise. A further extension would be to integrate a presence system with the IMS so that messages are sent only to users who are available for taking action.
The increasingly mobile workforce is another key area in which deploying ECAs can offer advantages. For example, a company with its salespersons located in far-flung areas may use an ECA to ensure that all its salespersons have access to reliable and up-to-date pricing information and can, in turn, update sales data from their location.
Contact center applications are a prime example of ECAs. A contact center solution involves multimedia communications as well as business workflows and enterprise applications for the contact center, e.g. outbound telemarketing flows, inbound customer care flows, customer management, user management, etc.
Examples of ECAs include (a) applications that notify administrators by email in the event of a problem condition in the stock situation of an inventory, (b) applications that help to resolve customer complaints by automatically notifying principal parties, (c) applications that prevent infrastructure problems by monitoring machine-to-machine communications, then initiating an emergency conference call in the event of a failure, (d) applications to organize emergency summits to address a significant change in a business metric, such as a falling stock price, (e) applications that confirm mobile bill payments, (f) applications for maintaining employee schedules, (g) applications that provide presence status so that it is known which users can be contacted in a given business process at any time, and (h) applications that facilitate communication and collaboration across multiple media of communication according to the business processes and workflows of the organization.
With the advent of new communications technologies such as voice, video, and the like, the advantages of combining communications applications with enterprise applications are all the more numerous. However, integrating communications applications with enterprise applications is a non-trivial problem and involves considerable effort during application development. This is because the requirements of enterprise applications and communications applications differ greatly. Communications applications such as telecom switching, instant messaging, and the like, are event-driven or asynchronous systems. In such systems, service requests are sent and received in the form of events that typically represent an occurrence requiring application processing. Further, communications applications are typically made of specialized light-weight components for high-speed, low-latency event processing. Enterprise applications, on the other hand, typically communicate with each other through synchronous service requests using Remote Procedure Call (RPC), for example. Further, application components in enterprise applications are typically heavy-weight data access objects with persistent lifetimes.
An ECA must solve the problem of integrating communications applications and enterprise applications. In a typical ECA, the communication applications direct bursts of asynchronous service requests (or events) to the enterprise applications at intermittent intervals. The enterprise application should be able to process the events received from the communication applications as well as synchronous service requests received from users or other enterprise application components in the system, taking into account the ordering, prioritization and parallelism requirements of the service requests. Integration without clear identification of these service request processing requirements may lead to poor throughput as well as poor response times for the service requests.
FIG. 1 shows one of the existing solutions for routing asynchronous service requests to enterprise application components 104 hosted by an enterprise application server 102 (e.g. Java 2 Platform, Enterprise Edition (J2EE)). This approach involves the use of Messaging Application Programming Interfaces (APIs) such as Java Message Service (JMS). A JMS implementation can be integrated with J2EE by using JMS in conjunction with the Message-Driven Beans (MDBs) of J2EE. However, such an approach is problematic since it does not make use of a Service Component Architecture (SCA) for communications applications. In the absence of containers that natively support event-driven applications, much development effort is required. For example, event processing with respect to ordering and parallelism, henceforth referred to as process control, may conveniently be implemented in a container. As such, a developer creating an application using the container only needs to configure the process control for the application. Since Messaging APIs do not incorporate process control, a developer needs to spend considerable effort in coding for process control in the application. Further, this approach requires the developer to implement routing components 110 and a queue connection 112 to encode the message routing logic within the enterprise application server. Further yet, no universal standards exist regarding the Messaging APIs to be used. Thus, for example, a first application which is a JMS client may communicate with a second application only if the second application is a JMS client.
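For illustration, the following is a minimal sketch, in the spirit of the FIG. 1 approach, of a J2EE Message-Driven Bean receiving asynchronous events over JMS and handing them to an enterprise application component; the queue name and the routing helper are hypothetical, and, as discussed above, all process control logic must be hand-coded by the developer.

import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical MDB illustrating the prior-art approach of FIG. 1.
@MessageDriven(mappedName = "jms/StockEventQueue") // hypothetical queue name
public class StockEventMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String event = ((TextMessage) message).getText();
                // The developer must hand-code ordering, prioritization and
                // parallelism ("process control") here; the messaging API
                // provides none of it.
                routeToEnterpriseComponent(event);
            }
        } catch (Exception e) {
            // Error handling and redelivery policy are also left to the developer.
            e.printStackTrace();
        }
    }

    private void routeToEnterpriseComponent(String event) {
        // Hand-written routing logic corresponding to routing components 110.
    }
}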
FIG. 2 shows another solution for sending asynchronous service requests to enterprise applications. In this approach, enterprise application components 204 are hosted by an enterprise application server 202 (e.g. J2EE) and communications application components 208 by a communications application server 206 (e.g. Java Advanced Intelligent Networks (JAIN) Service Logic Execution Environment (SLEE)). Such an approach takes advantage of the container-based approach for developing and deploying applications. However, such an approach still requires considerable development effort while integrating enterprise application components 204 with communications application server 206. For example, for each enterprise application integrated with JAIN SLEE, a resource adapter particular to that enterprise application needs to be implemented by the developer. Further, the requirement of separate application servers increases the effort during deployment and maintenance.
The preceding consideration of the prior art shows that developing and deploying ECAs is made difficult by the differing requirements of communications and enterprise applications. Thus, a need exists for a development and execution environment which (a) provides all the advantages of a container-based approach to application development for both communications and enterprise applications, and (b) allows communications and enterprise applications to be integrated and co-exist without additional development effort.
SUMMARY OF THE INVENTION
The present invention describes a DACX ComponentService framework which provides an execution environment to a plurality of application components, the plurality of application components including both enterprise application components and communications application components. The DACX ComponentService framework provides facilities for: (a) container-based development of both enterprise applications and communications applications, and (b) seamless integration and co-existence of enterprise applications with communications applications without additional development effort. Further, the plurality of application components may be hosted by the nodes of a distributed system. Thus, the DACX ComponentService framework can be used to develop and integrate enterprise applications and communications applications in a distributed system.
According to a preferred embodiment of the present invention, the DACX ComponentService framework provides a method for routing both synchronous and asynchronous service requests among a plurality of application components hosted by the nodes in a distributed system. A component service and an associated application component are registered at a set of nodes in the DACX ComponentService framework. A requesting node in the DACX ComponentService framework requests a service registered with the framework by sending a request for a service reference for the service. In response to the request, a first node is identified where an application component instance of the application component associated with the service is to be created. Information about the application component instance and the service method is encoded into a stub and sent to the requesting node.
The requesting node uses the stub to send a service request for the service. The service request is routed to an execution node where the application component instance is running. The execution node may be the first node identified or a different node where the service is registered. The physical address of the execution node is retrieved by the DACX ComponentService framework at runtime using the information about the application component instance contained in the service request. This property of determining the execution node at runtime makes the stub highly available. The service request is submitted to a message queue associated with the service. The queuing policy for the service is defined during registration of the service. Each message queue is assigned to a queue group. A queue group is configured with a scheduler and a thread pool. The thread pool has parameters to control the minimum and maximum number of threads, thread priority, and other thread pool parameters. The scheduler schedules the submission of the service request from the message queue into the thread pool according to a scheduling algorithm. A thread is allocated from the thread pool to the application component instance which is going to execute the service request. The execution of a service request depends on the service method invocation type of the service method in the service request. The service method invocation type may be synchronous or asynchronous. For an asynchronous invocation, the service request may carry an additional response handler parameter. A delegate of the response handler parameter is created during execution, which encodes the return value of the invoked service method into a response message and communicates it back to the requesting node. The response message is decoded at the requesting node to retrieve the return value of the service method.
During the execution of the service request by the thread from the thread pool, the DACX ComponentService framework keeps track of the threads which execute the service request and the subsequent service requests generated by them, by assigning universally unique flow ids to the threads of execution. The flow ids are propagated and assigned based on the service method invocation type in the service requests. The flow ids are then logged by the logger with every log message, providing unique flow information for logged messages spanning multiple nodes in the distributed system.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing asynchronous invocation from a communications application to an enterprise application using Messaging APIs.
FIG. 2 is a block diagram showing asynchronous invocation from a communications application server to an enterprise application server.
FIG. 3A and 3B are schematics representing the DACX ComponentService Framework in a distributed system, in accordance with an embodiment of the invention.
FIG. 4 is a schematic showing an exemplary embodiment of the Drishti Advanced Communication Exchange or DACX, in accordance with an embodiment of the invention.
FIG. 5 is a schematic of the component controller of the DACX ComponentService framework, in accordance with an embodiment of the invention.
FIG. 6 is a flow diagram illustrating a method for routing service requests in DACX Component Service Framework, in accordance with an embodiment of the invention.
FIG. 7 is a flow diagram illustrating registration of a service with DACX ComponentService Framework, in accordance with an embodiment of the invention.
FIG. 8 is a flow diagram illustrating the service discovery process, in accordance with an embodiment of the invention.
FIG. 9 is a flow diagram illustrating the process of execution of a service request, in accordance with an embodiment of the invention.
FIG. 10 is a flow diagram illustrating the process of routing a service request from a requesting node to an execution node, in accordance with an embodiment of the invention.
FIG. 11A and FIG. 11B are flow diagrams illustrating execution of a service method in a service request having asynchronous invocation, in DACX 304, in accordance with an embodiment of the invention.
FIG. 12A and FIG. 12B are flow diagrams illustrating execution of a service method in a service request having synchronous invocation, in DACX 304, in accordance with an embodiment of the invention.
FIG. 13 is a flow diagram illustrating an example of a scheduling algorithm, in accordance with an embodiment of the invention.
FIG. 14 is a flow diagram illustrating the process of rewiring of an application component instance in case of node failures, in accordance with an embodiment of the invention.
FIG. 15 is a flow diagram illustrating the steps of flow id generation of threads executing service requests, in accordance with an embodiment of the invention.
FIG. 16 is a schematic representing a sample hierarchy of a primary service request and subsequent secondary service requests and flow ids of threads executing the primary and secondary service requests, in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE DRAWINGS
DACX ComponentService framework provides advantages of a container-based approach to application development for both enterprise applications and communications applications. In such an approach, problems of application integration and process control are solved by an application container. Moreover, DACX ComponentService framework does not require additional development work in terms of implementing routing components 110 and queue connection 112.
Further, DACX ComponentService framework does not require additional development work in terms of implementing resource adapters 210 while integrating enterprise applications and communications applications. DACX ComponentService framework provides an application container for both enterprise application components (EACs) and communication application components (CACs). An EAC typically makes synchronous service requests and in turn provides synchronous processing of service requests. On the other hand, a CAC typically makes asynchronous service requests and in turn provides asynchronous processing of service requests. DACX ComponentService framework addresses the problem of integrating the communications applications and the enterprise applications at the level of the application container itself. DACX ComponentService framework provides configuration options using which application developers may integrate the enterprise application components and the communications application components.
In the following description numerous specific details are set forth to provide a more thorough description of the present invention. Preferred embodiments are described to illustrate the present invention, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art will recognize a variety of equivalent variations on the description that follows.
FIG. 3A and 3B are schematics representing the DACX ComponentService Framework in a distributed system, in accordance with an embodiment of the invention. According to an embodiment, the DACX ComponentService Framework comprises a plurality of nodes and the Drishti Advanced Communication Exchange or DACX 304. FIG. 3A illustrates four nodes: node-1 302, node-2 302, node-3 302, and node-4 302. A node can be, for example, a computer system. According to an embodiment, each of the plurality of nodes 302 hosts DACX 304. A node may have one or more services registered. A service has one or more methods, each method having an invocation type - synchronous or asynchronous. Each service is associated with an application component capable of executing the service. An application component is a building block for an application. Application components expose services to be used by other services and consume exposed services to achieve the desired functionality of the application components. An application component may be intended to perform any specific function in the enterprise communication application. To run an application component at a node, an instance of the application component is created at the node. The instance of an application component is referred to as an application component instance.
A node comprises the application components associated with the services registered at the node. For example, service A is registered with node-1 302, service B is registered with node-2 302, and services A and B are both registered with node-3 302. Thus, node-1 302 comprises application component 306 associated with service A; node-2 302 comprises application component 308 associated with service B; and node-3 302 comprises both application component 306 and application component 308. A service X is registered with node-4 302.
Further, a service may be a component service or a non-component service. Component services are highly available services. A highly available service is registered with multiple nodes. The presence of a component service at multiple nodes allows failover of application components from one node to another node, making the service highly available in case of node failure(s). Failover of application components implies recreation of an application component instance at a new node when an old node running the application component instance fails. Service A and Service B are component services, as each is registered at more than one node. A non-component service is registered at only a single node. Service X is a non-component service and is available only at node-4.
DACX 304 is an application container for development of both EAC and CAC. DACX 304 is based on the principles of Service-Component Architecture for distributed systems. An application component in DACX 304 acts as an EAC for service methods with synchronous invocation and as CAC for service methods with asynchronous invocation. Thus, DACX 304 provides an execution environment for enterprise applications as well as communications applications.
FIG. 4 is a schematic showing an exemplary embodiment of DACX 304, in accordance with an embodiment of the invention.
Constituents of DACX 304 may be grouped under a services and components layer 402, a process control layer 404, and a messaging layer 406. Services and components layer 402 comprises modules that provide facilities related to services and application components. Application developers can incorporate these facilities into application implementations while creating applications using DACX 304.
Services and components layer 402 comprises a component controller 408, a service registrar 410, timer 412, a logger 414 and a metric collector 416. Component controller 408 manages functionality of application components. Component controller 408 is described in further detail in conjunction with FIG. 5.
Service registrar 410 is used to register a service with DACX 304. A service is registered at a node through the creation of a service instance of the service at the node. For example, node 1 has a service instance of Service A, node 2 has a service instance of Service B, node 4 has a service instance of Service X and node 3 has service instances of Service A and Service B. A service instance is an individual instance of a service to which service requests may be directed by a requesting node. For example, the requesting node can be node 2 directing a service request towards the service instance of Service A. The service request is executed by the service instance in the scope of the application component associated with the service, residing at an execution node. In the above example, the execution node can be node 1 or node 3, where Service A and application component 306 associated with Service A are registered. Any service request for Service A will be executed by a service instance of Service A running on node 1 or node 3. A service request comprises a service method having either synchronous or asynchronous invocation. The process and requirements associated with service registration are described in conjunction with FIG. 7.
Other modules present in services and components layer 402 provide functions that facilitate application development using DACX 304. Timer 412 is used to submit timer jobs that are to be executed after the lapse of a variable time duration. Timer 412 also supports timer jobs that recur with a constant duration, as well as rescheduling of jobs on a need basis. Timer jobs are used to keep track of time lapse during execution of applications. For example, timer jobs may be used to track time lapse in the execution of a service request. Timer 412 in DACX 304 extends the capability of the queuing mechanism of process control layer 404 to allow submission of timer jobs to be executed in specific queues having specific queuing policies. These queues may be the queues where service requests are queued. This allows application developers to execute timer jobs in the queues along with other service requests, according to their ordering and parallelism requirements, and possibly avoid the need to synchronize execution with the other service requests. For example, while a service request is sent by a requesting node to be processed, a timer job is submitted in a queue at the requesting node. The timer job will be scheduled for execution from the queue in the same manner as a service request is scheduled. Scheduling of service requests for execution is described later. While the timer job is being executed, the requesting node waits for a response to the service request. If the response is not received before the execution of the timer job is over, an exception will be raised indicating that the service request response has not arrived within the predefined time duration. Further, a timer job can be rescheduled for execution by timer 412 after its execution is over. For example, a timer job can be scheduled and rescheduled to keep track of time lapse in the execution of a series of service requests sent from the requesting node at constant intervals.
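By way of illustration, the following sketch shows how a timer job guarding a pending service request might be submitted through timer 412; the Timer and TimerJob interfaces, the queue id "queue-1" and the helper class are assumptions made for illustration only and do not represent the actual DACX API.

// Hypothetical sketch of submitting a timer job into a specific message queue.
class TimerJobExample {

    // Assumed timer contract: a job is submitted into a named message queue
    // and scheduled from it like any other service request.
    interface TimerJob { void run(); }
    interface Timer {
        void submit(String queueId, long delayMillis, TimerJob job);
    }

    static void awaitResponse(Timer timer) {
        // Guard a pending service request: if the job fires before a response
        // has been recorded, an exception is raised, as described above.
        timer.submit("queue-1", 5000, () -> {
            if (!ResponseTracker.responseReceived()) {
                throw new IllegalStateException(
                        "service request response not received within the predefined duration");
            }
        });
    }

    // Hypothetical helper that records whether the response has arrived.
    static class ResponseTracker {
        private static volatile boolean received = false;
        static void markReceived() { received = true; }
        static boolean responseReceived() { return received; }
    }
}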
Logger 414 provides facilities for logging application data which can subsequently be used while performing maintenance and servicing operations. Logger 414 may encapsulate any logging utility and, therefore, may log messages to the different kinds of destinations supported by the underlying logging utility, including files, consoles, operating system logs and the like. During execution of a service request, a log message is generated. The log message is associated with the flow id of the thread executing the service request; the flow id is logged into a log file along with the log message. The process of assigning a flow id to a thread is explained in detail in conjunction with FIG. 15.
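The following is a hypothetical sketch of how a flow id could be attached to the executing thread and included in every log message; the FlowContext helper is an assumption for illustration and is not the actual logger 414 interface.

import java.util.UUID;
import java.util.logging.Logger;

// Hypothetical helper associating a flow id with the executing thread.
class FlowContext {
    private static final ThreadLocal<String> FLOW_ID = new ThreadLocal<>();
    private static final Logger LOG = Logger.getLogger("dacx.example");

    // Assign a universally unique flow id to the current thread of execution.
    static String assignFlowId() {
        String id = UUID.randomUUID().toString();
        FLOW_ID.set(id);
        return id;
    }

    // Propagate the flow id of a primary service request to the thread
    // executing a subsequent (secondary) service request.
    static void propagateFlowId(String parentFlowId) {
        FLOW_ID.set(parentFlowId);
    }

    // Every log message carries the flow id of the executing thread.
    static void log(String message) {
        LOG.info("[flow " + FLOW_ID.get() + "] " + message);
    }
}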
Metric collector 416 records statistics for metrics related to service request execution, for example, statistics for average queuing latency and average servicing time. Average queuing latency refers to the time spent by a service request in a message queue. Message queues are described in detail below. Average servicing time refers to the time taken to process a service request, starting with service invocation. Metric collector 416 supports an extensive configuration for a message queue to allow the service request execution statistics to be collected per application component instance as well as per individual method of the service(s) associated with the message queue. Metric collector 416 can also collect statistics for a queue group at a summary level, allowing fine tuning of the application deployment to achieve desired processing needs. Further, the immediate, on-the-fly updating of the statistics with each service request being processed allows the information to be used by scheduler 422 to react to the situation in order to achieve the desired results.
Process control layer 404 comprises a plurality of message queues 418, one or more thread pools 420, scheduler 422, and thread controller 424.
FIG. 4 illustrates two message queues, message queue-1 418 and message queue-2 418. Each of the plurality of message queues 418 is associated with one or more services. According to an embodiment of the invention, during registration of a service, a queuing policy is defined for the service and the service is assigned a particular queue ID which identifies a message queue associated with the service. For example, message queue-1 418 may be associated with Service A and message queue-2 418 may be associated with Service B. Further, a single message queue may be associated with more than one service. For example, message queue-1 418 may be associated with both Service A and Service B.
According to an embodiment of the invention, a message queue associated with a service stores service requests directed to a service instance of the service.
According to an embodiment of the invention, a message queue stores service requests for service methods with asynchronous invocations.
According to another embodiment of the invention, the message queue additionally stores service requests for service methods with synchronous invocations directed to a service instance of the service which need to be processed according to a sequence, e.g. the order in which they are received by DACX 304.
The queuing policy of a service defines the order of queuing of service requests in a message queue. For example, if the queuing policy is single threaded, then all the service requests, whether for service methods with synchronous invocation or asynchronous invocation, need to be queued in the message queue. If the queuing policy is not single threaded, then all service methods with asynchronous invocations are queued in the message queue while all service methods with synchronous invocations are executed without queuing.
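As an illustration of the queuing decision described above, the following sketch queues every request under a single-threaded policy and, otherwise, queues only asynchronous invocations while executing synchronous invocations directly; the types shown are assumptions for illustration, not the actual DACX classes.

// Hypothetical sketch of the queuing decision for incoming service requests.
class QueuingDecisionExample {

    enum QueuingPolicy { SINGLE_THREADED, MULTI_THREADED }
    enum InvocationType { SYNCHRONOUS, ASYNCHRONOUS }

    interface ServiceRequest {
        InvocationType invocationType();
    }
    interface MessageQueue {
        void enqueue(ServiceRequest request);
    }

    static void submit(ServiceRequest request, MessageQueue queue,
                       QueuingPolicy policy) {
        if (policy == QueuingPolicy.SINGLE_THREADED
                || request.invocationType() == InvocationType.ASYNCHRONOUS) {
            // Single-threaded policy: every request, synchronous or
            // asynchronous, is ordered through the message queue.
            // Otherwise only asynchronous invocations are queued.
            queue.enqueue(request);
        } else {
            executeDirectly(request); // synchronous invocation, no queuing
        }
    }

    static void executeDirectly(ServiceRequest request) {
        // Hypothetical in-line execution on the caller's thread.
    }
}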
Thread pool 420 is a pool of threads with a variable number of threads to which a service request is submitted from a message queue for execution by one of the threads in thread pool 420. Each thread returns to thread pool 420 after executing a service request and is allocated a new service request which was submitted to thread pool 420.
Scheduler 422 manages scheduling of service requests in the message queues for submission to thread pool 420. Scheduler 422 runs a scheduling algorithm to check whether a service request from a message queue needs to be submitted to thread pool 420. The scheduling algorithm takes parameters for each message queue, such as the expected processing latency, service request priority, queuing policy requirements of a service, and the like. Based on the result of the scheduling algorithm, scheduler 422 submits a service request to thread pool 420 for allocation of a thread.
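A much simplified sketch of one scheduling pass is shown below; it submits at most one request per message queue per pass on top of standard java.util.concurrent types and is not the actual scheduler 422, which additionally considers expected processing latency, priority and queuing policy.

import java.util.List;
import java.util.Queue;
import java.util.concurrent.ExecutorService;

// Hypothetical, simplified sketch of a single scheduling pass.
class SchedulerSketch {

    static void scheduleOnce(List<Queue<Runnable>> messageQueues,
                             ExecutorService threadPool) {
        // Take at most one pending request from each message queue and
        // hand it to the thread pool for execution by an allocated thread.
        for (Queue<Runnable> queue : messageQueues) {
            Runnable request = queue.poll();
            if (request != null) {
                threadPool.submit(request);
            }
        }
    }
}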
The queuing policy of a service specifies additional strategy for scheduling execution of service requests in a message queue. There can be various strategies for scheduling the service requests stored in a message queue. Some of the strategies provided for in DACX 304 are:
1. At most one service request is picked for execution at a time.
2. Several service requests are picked for execution at a given time.
3. Service requests may be picked for execution based on discovery scope of the service requests.
4. Service requests may be picked for execution based on a priority assignment or reservation policy for end users. Such a priority assignment or reservation policy may be used to provide differentiated subscriptions to the end users. For example, the service requests from end users paying a higher subscription fee may have a higher priority compared to the service requests from end users paying a lower subscription fee.
The scheduling of service requests from message queues is further controlled through the creation of queue groups. Queue group-based processing control for queued service requests is described in conjunction with FIG. 13.
Thread controller 424 allocates threads from thread pool 420 to service instances of different services for execution of the service requests submitted to thread pool 420. Thread controller 424 manages the usage of thread pool 420 based on parameters configured by an administrator. For example, thread controller 424 may restrict the maximum number of threads in thread pool 420 at any given time and the number of service requests submitted to thread pool 420 for execution.
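The following sketch uses the standard java.util.concurrent.ThreadPoolExecutor to illustrate the kind of parameters thread controller 424 manages for thread pool 420, namely the minimum and maximum number of threads and the thread priority; it is an illustration only and not the DACX implementation.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative configuration of a thread pool with min/max size and priority.
class ThreadPoolConfigSketch {

    static ThreadPoolExecutor createPool(int minThreads, int maxThreads,
                                          int threadPriority) {
        return new ThreadPoolExecutor(
                minThreads,                  // core (minimum) pool size
                maxThreads,                  // maximum pool size
                60, TimeUnit.SECONDS,        // idle thread keep-alive
                new LinkedBlockingQueue<>(), // submitted service requests
                runnable -> {
                    Thread t = new Thread(runnable);
                    t.setPriority(threadPriority); // e.g. Thread.NORM_PRIORITY
                    return t;
                });
    }
}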
Messaging layer 406 routes messages between nodes. The messages may be service requests, requests for service references, response messages, service registration, discovery and association messages, and the like.
A request for service reference generated by a requesting node is routed by messaging layer 406 to component controller 408. Messaging layer 406 encodes application component instance information received from component controller 408 into a stub and routes the stub to the requesting node. The stub is used by the requesting node to send a service request to an execution node.
Messaging layer 406 encodes the service request into a message and routes it to the execution node. The execution node hosts an application component and an associated service instance of the service, wherein the service instance executes the service request. After execution, the service instance at the execution node generates a return value. Messaging layer 406 encodes the return value into a response message and routes it back to the requesting node.
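The following hypothetical sketch illustrates the request and response messages routed by messaging layer 406; the field names are assumptions for illustration and not the actual DACX message formats.

import java.io.Serializable;

// Hypothetical message envelopes for the round trip described above.
class MessagingRoundTripSketch {

    // Request message: the application component instance id gives the
    // logical address of the execution node; method name and arguments
    // identify the invocation.
    static class RequestMessage implements Serializable {
        final String componentInstanceId;
        final String serviceName;
        final String methodName;
        final Object[] arguments;
        RequestMessage(String instanceId, String service,
                       String method, Object[] args) {
            this.componentInstanceId = instanceId;
            this.serviceName = service;
            this.methodName = method;
            this.arguments = args;
        }
    }

    // Response message carrying the encoded return value.
    static class ResponseMessage implements Serializable {
        final Object returnValue;
        ResponseMessage(Object returnValue) { this.returnValue = returnValue; }
    }

    // At the requesting node: decode the return value from the response.
    static Object decode(ResponseMessage response) {
        return response.returnValue;
    }
}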
FIG. 5 is a schematic of component controller 408 of the DACX ComponentService framework, in accordance with an embodiment of the invention. Component controller 408 comprises a component factory 502 and component context controller 504.
Component factory 502 performs the service discovery process. The service discovery process is a requirement in a distributed system in which a plurality of nodes 302 hosts application components. In such a system, application components may become non-viable under a variety of circumstances, for example, congestion of network channels, cyber attacks, power failures, system crashes and the like. Further, a distributed system may include mobile nodes communicating with other nodes through wireless channels. Movement of a mobile node beyond the range of a wireless network results in unavailability of the application components hosted by the mobile node. Thus, unavailability of a node can hamper availability of services. Therefore, additional nodes should be present to which service requests can be rewired in case of node unavailability. For example, Service B is registered with node 2 and node 3. In case node 2 fails or goes out of range of the wireless network, service requests for Service B can be routed to node 3. Here node 3 serves as an additional node for Service B. During the service discovery process, a node capable of running an application component instance associated with the service is identified. In the above example, if node 2 is unavailable, then during the service discovery process node 3 will be identified for executing service requests related to Service B. The process of rewiring an application component instance to a new node in case of node failure is described in conjunction with FIG. 14.
The service discovery process is initiated in response to a request for service reference having a discovery scope. Each valid discovery scope is bound to an application component instance associated with a service. Subsequent requests for service reference having the same discovery scope lead to immediate mapping of the serving application component instance with the requests for service reference, until the binding is explicitly removed. For example, node 2 sends a request for service reference of Service A with discovery scope D1. Node 1 is running multiple application component instances of application component 306 having different discovery scopes. Component factory 502 tries to map the discovery scope of the request for service reference with the discovery scopes of the application component instances running at node 1. In case the discovery scope maps to application component instance A1, then component factory 502 binds application component instance A1 with the request for service reference. Any future request for service reference of Service A with discovery scope D1 will be bound to application component instance A1 as long as it is functional. If application component instance A1 stops, future requests for service reference with discovery scope D1 will be bound to a second application component instance with discovery scope D1. The second application component instance may be running on node 1 itself or on node 3 where Service A is registered.
If no binding exists for a request for service reference of Service A, component factory 502 for application component 306 is invoked by DACX 304 to decide whether to bind the request for service reference to an existing application component instance or to create a new application component instance. In the previous example, if the request for service reference with discovery scope D1 does not match the discovery scope of any of the multiple application component instances running at node 1, then component factory 502 decides where to create a new application component instance of application component 306 with discovery scope D1. The new application component instance may be present on node 1 or node 3 depending on the load distribution policy of application component 306.
After the service discovery process, component factory 502 returns application component instance information to messaging layer 406. The application component instance information comprises the id of the application component instance bound to the request for service reference and a replica of the service methods of the service. The application component instance information is encoded into a stub by messaging layer 406.
A component factory contract associated with an application component defines the load distribution policy for the application component. The load distribution policy is defined during registration of a service and its associated application component. The load distribution policy defines the binding of a request for service reference of the service, and of subsequent service requests, to an application component instance of the application component. For example, a load distribution policy can define a binding such that any request for service reference of Service A received from node 2 will be bound to application component instance A1 of application component 306 at node 1, and any request for service reference for Service A from node 4 will be bound to application component instance A2 at node 3. Further, the load distribution policy can also define the maximum number of bindings to an application component instance. For example, the maximum number of bindings for application component instance A1 can be defined as 10. In case the maximum number has been reached, any further request for service reference will be bound to a different application component instance running either at node 1 or node 3.
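The following hypothetical sketch illustrates a component factory that binds requests for service references by discovery scope and caps the number of bindings per application component instance, in the spirit of the load distribution policy described above; the class names and the limit of 10 are illustrative assumptions.

import java.util.HashMap;
import java.util.Map;

// Hypothetical component factory honouring a simple load distribution policy.
class ComponentFactorySketch {

    static final int MAX_BINDINGS = 10; // illustrative limit per instance

    static class Instance {
        final String id;
        int bindings;
        Instance(String id) { this.id = id; }
    }

    private final Map<String, Instance> byDiscoveryScope = new HashMap<>();

    // Bind a request for service reference carrying the given discovery scope.
    Instance bind(String discoveryScope) {
        Instance instance = byDiscoveryScope.get(discoveryScope);
        if (instance == null || instance.bindings >= MAX_BINDINGS) {
            // No usable binding exists: create a new application component
            // instance (locally here; the real policy may pick another node).
            instance = new Instance("instance-" + discoveryScope + "-"
                    + System.nanoTime());
            byDiscoveryScope.put(discoveryScope, instance);
        }
        instance.bindings++;
        return instance;
    }
}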
Component factory 502 further comprises component handler 506. Component handler 506 performs life-cycle management for application components. The life cycle of an application component instance is described by the following states that it may be in:
1. Started: The application component instance is made available to DACX 304 and thus can be discovered.
a) Initialized: The application component instance is initializing and cannot serve requests but is available for discovery. All service requests made during this period would be queued at the message queues associated with the service and would be served once initialization is complete.
2. Active: The application component instance is active and it is serving service requests.
3. Stopped: The application component instance is no longer available for serving service requests.
Component handler 506 provides the functionality for starting, initializing and stopping an application component instance.
A component handler contract defines the life cycle management operations for an application component instance of the application component associated with a service. According to an embodiment of the invention, the component handler contract is used to configure how starting, initialization, and stopping are performed for an application component instance.
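The following is a hypothetical sketch of what a component handler contract covering the life-cycle states listed above might look like; the interface and state names are assumptions for illustration, not the actual DACX contracts.

// Hypothetical life-cycle states and handler contract for an application
// component instance.
class ComponentHandlerSketch {

    enum LifecycleState { STARTED, INITIALIZED, ACTIVE, STOPPED }

    interface ComponentHandler {
        void start();       // make the instance available to DACX 304 for discovery
        void initialize();  // prepare the instance; requests are queued meanwhile
        void stop();        // withdraw the instance from serving requests
        LifecycleState state();
    }
}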
According to an embodiment of the invention, the component handler contract and the component factory contract are required for registering an application component with DACX 304. An application component must be registered with DACX 304 in order to be made available for the service discovery process and for execution of service requests by a service instance of the service.
Component context controller 504 manages and updates the state of application component instances. The state of an application component instance is stored in a generic data structure, called a component context, with DACX 304. The state information of an application component instance is used during node failures for recreation of the application component instance at another node where the application component with which the application component instance is associated is present.
For the description of FIG. 6 to FIG. 12, the following example is used to explain the invention and various embodiments: node 2 is the requesting node, which generates a service request for Service A. Either node 1 or node 3 executes the service request for Service A; hence node 1 or node 3 can be the execution node. Service A comprises service methods with synchronous as well as asynchronous invocations.
FIG. 6 is a flow diagram illustrating a method for routing service requests in DACX Component Service Framework, in accordance with an embodiment of the invention.
At step 602, Service A is registered with at least one node, for example, service A may be registered with node 1. The step of registering Service A is described in detail in conjunction with FIG. 7.
At step 604, a request for a service reference of Service A is received from node 2, which is the requesting node. The request for the service reference comprises the discovery scope, typically the id and type of the application component requesting the service reference. The application component requesting the service reference is hosted by node 2. The type of an application component is used to identify the application component, i.e., every application component is registered with the framework under a type or name.
At step 606, in response to the request for service reference, an application component instance of application component 306 is discovered to which the request for service reference and subsequent service requests related to Service A will be bound. The step of discovering the application component instance is described in detail in conjunction with FIG. 8.
At step 608, a stub is sent to node 2 in response to the request for service reference. The stub comprises information about the service method types, i.e. whether the service methods of Service A are synchronous or asynchronous. For a non-component service, the stub further comprises the physical address of the node at which the non-component service is registered. The physical address may be the node id, which is a unique runtime identifier and also acts as the unique address of the node to which non-component service requests are routed. In the case of Service A (a component service), the stub further comprises application component instance information. The application component instance information comprises the logical address of the execution node, expressed through the id of the application component instance associated with Service A which was discovered, and a replica of the service methods of Service A. The application component instance id is used at runtime to retrieve the physical address of the execution node where the application component instance is running.
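The following hypothetical sketch shows the kind of information the stub described above could carry; the field names are assumptions for illustration and not the actual DACX stub format.

import java.util.Map;

// Hypothetical stub contents for component and non-component services.
class StubSketch {

    enum MethodType { SYNCHRONOUS, ASYNCHRONOUS }

    static class Stub {
        // Method name -> invocation type, for every method of the service.
        final Map<String, MethodType> methodTypes;
        // Component service: logical address via the application component
        // instance id; the physical execution node is resolved at runtime.
        final String componentInstanceId;
        // Non-component service: physical address (node id) of the single
        // node where the service is registered; null for a component service.
        final String nodeId;

        Stub(Map<String, MethodType> methodTypes,
             String componentInstanceId, String nodeId) {
            this.methodTypes = methodTypes;
            this.componentInstanceId = componentInstanceId;
            this.nodeId = nodeId;
        }
    }
}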
At step 610, at least one service request for Service A is received from node 2. Service A comprises one or more methods whose information is sent in the stub to node 2. Node 2 uses the information about the service methods in the stub to generate a service request. Each service request comprises details for the invocation of one service method of Service A; to invoke multiple service methods of Service A, multiple service requests need to be generated, each comprising the details of one service method. A service request further comprises the id of the application component instance in the stub, the service name and the parameters required for invocation of the service method in the service request.
At step 612, the service request is routed to the execution node. The step of routing is described in detail in conjunction with FIG. 10.
FIG. 7 is a flow diagram illustrating registration of Service A with DACX 304, in accordance with an embodiment of the invention.
At step 702, Service A is registered at node 1 of DACX 304. Service registration is done by service registrar 410. Prior to registering Service A, a service contract for Service A must be defined and implemented. The service contract specifies what operations Service A supports. For example, a service contract may be defined as a Java interface in which each service method corresponds to a specific service operation. The service contract may then be implemented by application component 306 associated with Service A. In the above example, implementing a service contract would involve writing a Java class that implements the Java interface.
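By way of example, a service contract for Service A defined as a Java interface, together with an implementing class standing in for application component 306, might look as follows; the method names are illustrative assumptions.

// Illustrative service contract for Service A defined as a Java interface.
public interface ServiceA {
    // Synchronous operation: returns the current stock level of an item.
    int getStockLevel(String itemId);

    // Asynchronous operation: triggers a notification handled later.
    void notifyShortage(String itemId);
}

// Illustrative implementation standing in for application component 306.
class ApplicationComponent306 implements ServiceA {
    @Override
    public int getStockLevel(String itemId) {
        return 0; // illustrative implementation
    }

    @Override
    public void notifyShortage(String itemId) {
        // illustrative implementation
    }
}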
For registration of Service A, application component 306 associated with Service A needs to be registered with node 1. Registration of Service A further comprises defining a component factory contract and a component handler contract for application component 306. Additional information, such as the queuing policy for Service A, is also defined during registration of Service A.
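The following hypothetical sketch shows what registering Service A through service registrar 410 could look like; the ServiceRegistrar interface, the contract parameters and the queue id are assumptions for illustration only.

// Hypothetical registration of Service A with its contracts and queuing policy.
class RegistrationSketch {

    enum QueuingPolicy { SINGLE_THREADED, MULTI_THREADED }

    // Assumed registrar contract.
    interface ServiceRegistrar {
        void register(Class<?> serviceContract,
                      Object componentFactoryContract,
                      Object componentHandlerContract,
                      QueuingPolicy queuingPolicy,
                      String queueId);
    }

    // Stand-in for the Service A contract shown in the previous sketch.
    interface ServiceA { }

    static void registerServiceA(ServiceRegistrar registrar,
                                 Object factoryContract,
                                 Object handlerContract) {
        // Register Service A with its contracts, a queuing policy and the
        // message queue it is associated with (e.g. message queue-1 418).
        registrar.register(ServiceA.class, factoryContract, handlerContract,
                QueuingPolicy.SINGLE_THREADED, "queue-1");
    }
}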
At step 704, a decision is made whether Service A needs to be highly available. According to an embodiment, the decision is made by an administrator. For a highly available service, the service needs to be registered at more than one node, such that in case of a node failure, rewiring to another node running a service instance of the service can be done to maintain availability of the service.