Abstract: A SYSTEM AND METHOD FOR PRIORITY SCHEDULING OF MESSAGE TYPES WITH SERIALIZATION CONSTRAINTS AND DYNAMIC CLASS SWITCHING The present invention relates to a system and method for priority queuing of messages with dynamic class switching of message types and with allocation constraints. The system and method enables optimal and lightweight allocation of resources for prioritizing the scheduling of messages. The system and method of the present invention further enables optimal load balancing of messages while preserving serialization constraints.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
A SYSTEM AND METHOD FOR PRIORITY SCHEDULING OF MESSAGE TYPES WITH
SERIALIZATION CONSTRAINTS AND DYNAMIC CLASS SWITCHING
Applicant
TATA Consultancy Services Limited A company Incorporated in India under The Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present invention relates to a system and method for priority queuing of messages with dynamic class switching of message types and with allocation constraints. More particularly, the invention relates to a system and method for optimal load balancing of messages while preserving lightweight allocation of resources for the applications.
BACKGROUND OF THE INVENTION
Priority queue, an abstract data type in computer programming, is most commonly used in resource allocation for operating systems. Priority queuing has gained industrial importance for communication from back office to front office and for order management systems in manufacturing industries.
It is commonly employed for operating systems in various applications such as telecommunication networks, banking and financial services applications etc.
The typical scheduling systems in the current state of art provide priority queueing in First Come First Serve (FCFS) order and thereby provide restrictive serialization. One of the disadvantages of such typical systems is that messages related to trade orders which are of lower priority cannot be processed and dispatched in parallel without hindering and affecting the processing of the high priority messages.
Another long-standing problem in priority scheduling occurs when two high priority messages not related to each other are mapped to the same queue of the processing servers. The typical scheduling system in the current state of art fails to schedule a new process for the second message on any other idle processing server.
Yet another challenging aspect in priority scheduling of the messages is to provide optimal message load balancing while preserving the serialization constraints for the messages.
Some of the inventions in the current state of art that provide systems and methods for priority queues are as follows:
US patent 7689998 titled "Systems and methods that manage processing resources" provides a port-based technique to manage processing resources that can be utilized by an Operating System. Though '998 patent provides a priority for a request, which is assumed to be static and known at the time of scheduling, it fails to provide optimal allocation under constraints. Further, '998 patent restricts itself to the assumption of a finite number of priority queues or classes.
US patent 7065766 titled "Apparatus and method for load balancing of fixed priority threads in a multiple run queue environment" provides an apparatus and method for load balancing fixed priority threads in a multiprocessor system. Moreover, '766 patent assumes time slicing and pre-emption of requests in service to deal with load balancing of fixed priority threads across multiple priority queues, where a higher priority thread may displace a lower priority thread. Further, '766 patent does not discuss the serialization constraints imposed for the same request class (baseid).
US patent 5333319 titled "Virtual storage data processor with enhanced dispatching priority allocation of CPU resources" discusses improvements in dispatching messages in operating systems for multi-programmed data processing systems. Moreover, '319 patent maintains a common priority of multiple processes serving a given application even if they are across different address spaces, and further deals with serialization through a single threaded execution of a given queue. Though '319 patent discusses improvements in dispatching messages in operating systems for multi-programmed data processing systems, it fails to provide optimal load balancing and does not discuss serialization constraints at an application level.
US patent 6567840 titled "Task scheduling and message passing" discloses a method for modeling real-time periodic task scheduling and passing messages within multi-task systems. Though '840 patent relates to the area of real-time scheduling under deadlines, where priority depends on the application characteristics and is inversely proportional to the deadline, it fails to discuss scheduling messages with priority under serialization constraints and optimality of load balancing.
US patent 4794926 titled "Microcomputer with priority scheduling" discloses a microcomputer comprising memory and a processor which is arranged to execute a plurality of concurrent processes and share its time between them. Though '926 patent discusses scheduling of processes within a single processor with static priorities attached to processes, it fails to discuss scheduling of processes with multiprocessors and dynamic priorities and further does not discuss elements of serialization.
In light of the above mentioned prior arts, it is evident that the current state of art restricts the scheduling of processes to a single processor with static priorities rather than providing multiprocessor scheduling with dynamic priorities.
Hence there is a need in the art for a customizable solution for:
1. Prioritizing the scheduling of multiple messages based on the scheduling requirements, scheduler designs and scheduler design requirements.
2. Optimally load balancing messages while preserving the serialization constraints.
In order to address the long-felt need for such a solution, the present invention provides a system and method to overcome all the above mentioned limitations for priority scheduling of the multiple messages with allocation constraints and dynamic class switching.
OBJECTS OF THE INVENTION:
The principal object of the invention is to provide a system and method for priority queuing of messages with dynamic class switching and with allocation constraints.
Yet another object of the invention is to enable optimal load balancing of messages while preserving serialization constraints.
Yet another object of the present invention is to provide and maintain lightweight allocation of resources for prioritizing the scheduling of messages.
Yet another object of the invention is to provide and implement the scheduler design for priority scheduling.
Yet another object of the invention is to provide and meet the requirements of scheduler design for priority scheduling.
Yet another object of the invention is to prioritize the scheduling of multiple messages based on scheduler requirements.
Yet another object of the invention is to enable execution of the messages in microseconds, with the largest overhead being the queue read and queue write.
SUMMARY OF THE INVENTION:
The present invention discloses a system and method for priority queuing with dynamic class switching and with allocation constraints.
The system and method of the present invention further enables optimal load balancing of messages while preserving serialization constraints.
According to the present invention, the method for priority scheduling and dynamic class switching of messages with restrictive serialization constraints comprises the computer implemented steps of:
a) Storing an incoming message in the memory cache on receipt of the incoming message by a scheduler;
b) Assigning a baseid and messageid for the said messages by the first message queue of a dispatcher; wherein all the related messages have the same baseid and all the corresponding messages have a unique messageid.
c) Indexing the said messages based on the message type, assigned base id's and message id's by the first message queue of the dispatcher;
d) Verification of the base id's and message id's assigned to the said messages by the said dispatcher;
e) Checking the status of the outbound message processing server's by the second message queue;
f) Communicating the status of the outbound message processing server's as either "idle" or "processing" by the second message queue to the said dispatcher.
g) Assigning a messageid to the idle outbound message processing server;
h) Pulling out the message with the corresponding assigned messageid from the message cache
and either dispatching the same in the first come first serve order to the idle outbound message
processing server in case of no conflicts in the base id's. In case of conflict in base id's the said
message is dispatched to the pending list of the said dispatcher; i) Invoking a service call for a messageid on receipt of the corresponding message for the
messageid by the idle outbound message processing server; j) Communicating the change in status of the outbound message processing server from "idle" to
"processing" by the second message queue "SvcOverQ" to the said dispatcher; k) Communicating the completion of the processing of the messageid by the outbound message
processing server to the dispatcher by the second message queue "SvcOverQ". 1) Switching to the pending message list of the dispatcher for processing the pending messages on
completion of all the messages with no serialization conflicts. m) Repeating steps d) to k) for optimal load balancing of messages with priority scheduling of
message types.
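By way of illustration only, the dispatch decision of the foregoing steps may be sketched as follows in Python; all names here (dispatch_cycle, busy_baseids and the like) are assumptions of this sketch and not part of the claimed method:

```python
from collections import deque

def dispatch_cycle(incoming, busy_baseids, idle_servers, pending):
    """One dispatcher pass: messages whose baseid is not in service go
    to idle servers in FCFS order; conflicting ones join the pending list."""
    assigned = []                                 # (serverid, messageid) pairs
    for baseid, messageid in incoming:
        if baseid in busy_baseids or not idle_servers:
            pending.append((baseid, messageid))   # serialization conflict or no idle server
        else:
            serverid = idle_servers.popleft()
            busy_baseids.add(baseid)              # baseid now "processing"
            assigned.append((serverid, messageid))
    return assigned
```

For example, two messages sharing a baseid cannot both be dispatched in one pass; the second waits on the pending list until the first completes.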
BRIEF DESCRIPTION OF THE DRAWINGS:
The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the
invention, there is shown in the drawings, example constructions of the invention; however, the invention is not limited to the specific methods disclosed in the drawings:
Figure 1 of the present invention illustrates the schematic representation of a typical scheduler system for priority queueing.
Figure 2 of the present invention illustrates a typical system for baseline scheduling without load balancing and serialization constraints for priority queueing based on baseid mod K.
Figure 3 of the present invention illustrates a scheduling system and method to provide optimal allocation of messages under serialization requirement R1.
Figure 4 of the present invention illustrates a scheduling system and method to provide optimal allocation of messages under serialization requirement R2.
Figure 5 of the present invention illustrates a system and method for scheduling both the type 1 and type 2 messages simultaneously and providing optimal allocation of messages under serialization requirements R1 and R2.
DETAILED DESCRIPTION OF THE INVENTION:
Before the present method and hardware enablement are described, it is to be understood that this invention is not limited to the particular methodologies and hardware described, as these may vary. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present invention, which will be limited only by the appended claims. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. The disclosed embodiments are exemplary methods of carrying out the invention, which may be embodied in various forms.
The invention is described in the given embodiments below which is provided only to illustrate the invention and therefore should not be construed to limit the scope of the invention.
In one of the significant embodiments of the present invention, the system of the present invention for priority scheduling and dynamic class switching of message types with both restrictive and less restrictive serialization constraints comprises:
a) a scheduler for receiving an incoming message; wherein the scheduler comprises of
(1) a memory cache for storing the incoming messages; wherein an incoming message comprises of a type 1 message and type 2 message.
(2) at least one outbound message processing server for processing the said incoming message;
(3) at least one dispatcher for dispatching the message to the said outbound message processing server; wherein the dispatcher further comprises of:
(a) at least one first message queue for assigning a baseid and messageid to the said corresponding message; wherein all the related messages have the same baseid and all the corresponding messages have a unique messageid.
(b) At least one second message queue "SvcOverQ" for communicating the status of the said outbound message processing server to the dispatcher for optimal load balancing of messages with priority scheduling of message types;
(c) Pending message list to store the list of pending type 1 messages;
(d) Type 2 message list to store the list of type 2 messages.
Figure 1 of the present invention illustrates the schematic representation of a typical scheduler system for priority queueing.
According to one of the embodiments of the present invention, the scheduler 300 receives incoming messages 100, which are stored in a memory cache 200; and are further accessed and indexed by the identifier of the memory cache 200 based on the messageid.
Based on the scheduling requirements, the incoming messages 100 may be either a:
Type 1 message 100(a) = order messages, which include but are not limited to order modifications and cancellations of a given order; or a
Type 2 message 100(b) = trade messages.
Further, these message types consist of a "baseid", an "orderid" and a unique identifier "messageid"; wherein the unique identifier is unique for each and every message while the baseid is the same for all messages that are related to each other. For example, the order confirmation, trade and order cancellation for the same order would have the same baseid.
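By way of a hypothetical example (the Message record and its field names are assumptions of this illustration), related messages share a baseid while every message carries its own unique messageid:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    baseid: int      # shared by all messages relating to the same order
    messageid: int   # unique for each and every message
    kind: str        # e.g. "confirmation", "trade", "cancellation"

# Confirmation, trade and cancellation for one order share a baseid:
related = [
    Message(baseid=42, messageid=1001, kind="confirmation"),
    Message(baseid=42, messageid=1002, kind="trade"),
    Message(baseid=42, messageid=1003, kind="cancellation"),
]
```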
According to another embodiment of the invention, the received incoming messages 100 are then further dispatched on a set of K outbound message processing servers 400 in a First Come First Serve (FCFS) order by the dispatcher 600 of the said scheduler 300.
The K outbound message processing servers 400 make synchronous calls for processing the incoming messages 100 in a FCFS order, i.e., they will process the next incoming message only after completion of processing of the message at the head of the queue.
According to one of the embodiments of the present invention, the typical scheduler system for priority queueing consists of two types of serialization requirements:
R1 = a restrictive scheduling requirement; wherein all messages pertaining to the same baseid are processed in FCFS order. This scheduling requirement applies more specifically to Type 1 messages 100(a).
R2 = a less restrictive scheduling requirement; wherein the messages may be processed in any order. This scheduling requirement applies more specifically to Type 2 messages 100(b); wherein the type 1 messages 100(a) have higher priority than the type 2 messages 100(b).
According to another embodiment, the present invention follows the following assumptions:
Assumption 1: The output service call takes of the order of 10 ms or more for processing a message. There are at most K = 50 output messages being processed concurrently at any time. In other words, at most K = 50 output server operating system processes are actively processing messages at any point in time.
Assumption 2: All messages received that have not been processed by the output server are stored in memory in a cache accessible by the scheduler 600. Each message can be accessed given its messageid.
Figure 2 of the present invention illustrates a typical system for baseline scheduling without load balancing and serialization constraints for priority queueing based on baseid mod K. The scheduler of the typical system for baseline scheduling without load balancing and serialization constraints receives the incoming messages 100, which are stored in a memory cache 200; and are further accessed and indexed based on the messageid and baseid by the first message queue 500 of the dispatcher 600.
Further according to another embodiment of the present invention, the incoming message may be a typel message 100(a) or type 2 messages 100(b).
According to one of the embodiments of the present invention, the K outbound processing servers 400 make synchronous calls for the incoming message 100 by invoking a service call and do not return until the service call completes the processing.
On receipt of the service call, the first message queue 500 of the dispatcher 600 pulls out and reads a particular stored message having a "messageid" from memory cache 200 in a FCFS order, computes the baseid mod K 700 and then further, dispatches the message to the relevant set of K outbound message processing servers 400; wherein the processes are numbered from 0 to K-1, in a First Come First Serve (FCFS) order.
Further, the stored message would be distributed to K outbound message processing servers 400 based on the baseid given to all the related messages such as but not limited to confirmation, modification, cancellation and trades.
Further, for a given baseid all the related messages would be dispatched to the same server's input queue; wherein the server processes the messages in the queue in FCFS order and thereby fulfills the serialization requirement R1 i.e. restrictive serialization. But one of the disadvantages of the typical system is that the type 2 messages dispatched with the baseid based routing cannot be processed in parallel for a given baseid. Further, if two type 1 messages 100(a) with different baseids are mapped to the same queue of the K outbound message processing servers 400; the second message will not be processed on any other idle processing servers.
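The baseline routing described above, and the weakness noted with it, can be illustrated with a minimal sketch (K and the route function are assumptions of this illustration, not the claimed system):

```python
K = 4  # illustrative number of outbound message processing servers

def route(baseid: int) -> int:
    """Static baseline routing: every message of a baseid maps to one
    fixed server queue, regardless of which servers are idle."""
    return baseid % K
```

With K = 4, two unrelated baseids such as 3 and 7 both map to server 3 and serialize needlessly even if servers 0 to 2 are idle, which is exactly the drawback noted above.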
Figure 3 of the present invention illustrates a scheduling system and method to provide optimal allocation of messages under serialization requirement R1.
The scheduler receives the incoming messages 100, which are stored in a memory cache 200, and are further accessed and indexed based on the messageid and baseid 700 by the first message queue 500 of the dispatcher 600.
According to one of the embodiments of the present invention, the incoming message 100 may be type 1 or type 2 messages.
Further, the dispatcher 600 checks for the serialization requirement R1 against the input message 100, if there are no serialization conflicts then the message is dispatched to the idle output server 400 for processing. But, if there is a serialization conflict, then the dispatcher dispatches the messageid to the pending list of the dispatcher.
The logic for Output Server 400 is given below:
loop forever
    Wait on queue for next messageid;
    Receive messageid;
    Pull out corresponding message from in-memory message cache;
    Invoke service for given message and wait for response;
    Send <Serverid, baseid> to SvcOverQ;
end loop
Further, second message queue "SvcOverQ" 800 is used to communicate the process status of the K outbound message processing servers 400 to the Dispatcher 600.
During initialization of the scheduler system, the second message queue "SvcOverQ" 800 has one message 100 for each K outbound message processing servers 400 signifying that all these K outbound message processing servers 400 are ready to do work.
Further, the second message queue "SvcOverQ" 800 provides whether a given process has been completed (Serverid is given for this purpose) by the K outbound message processing server 400.
On completion of the given process by the K outbound message processing server 400; the second message queue "SvcOverQ" 800 communicates the same to the dispatcher 600.
The dispatcher 600 works only if there is an idle K outbound message processing server 400 available to start the processing.
The logic for the Dispatcher is given below.
loop forever
    Wait on SvcOverQ for next output server to become free;
    Receive <Serverid, baseid> from SvcOverQ;
    Cleanup(Serverid, baseid, PendingList); // will be explained later
    Set messagefoundforSvc = FALSE;
    while (messagefoundforSvc == FALSE) do
        if (PendingList is Empty) then
            messagefoundforSvc = BlockingRead(InputMessageQueue, Serverid);
        else
            messagefoundforSvc = NonBlockingRead(InputMessageQueue, Serverid);
            if (messagefoundforSvc == EMPTYQUEUE) then
                messagefoundforSvc = ProcessFromPendingList(Serverid);
            endif
        endif
    end while
end loop
function BlockingRead(InputMessageQueue) // valid only if PendingList is empty
begin
    Read <baseid, messageid> from InputMessageQueue;
    return ProcessNewBaseid(baseid, messageid);
end
function NonBlockingRead(InputMessageQueue)
begin
    Read InputMessageQueue in non-blocking mode for next <baseid, messageid>;
    if (message is available in InputMessageQueue) then
        Pull out entry from BaseidHash table pertaining to this baseid;
        if (no entry exists in BaseidHash table) then
            return ProcessNewBaseid(baseid, messageid);
        else // insert into PendingList
            follow pointer to node as obtained from hash table;
            put messageid at tail of FCFS sublist of this node;
            return FALSE;
        endif
    else
        return EMPTYQUEUE;
    endif
end
procedure Cleanup(Serverid, baseid, PendingList)
begin
    Hash into BaseidHash table with baseid;
    Follow to node via pointer in hash table;
    if (messageid list for baseid is empty) then
        remove node from pending list and baseid from hash table;
    else
        mark status as IDLE in node and in hash table;
    endif
end
function ProcessFromPendingList(Serverid)
begin
    returnvalue = FALSE;
    scan through list for first node with status == IDLE;
    if (node is found) then
        mark node status and hash table status to PROCESSING;
        remove messageid from node in FCFS order;
        send messageid to input queue of Server Serverid;
        returnvalue = TRUE;
    endif;
    return returnvalue;
end
function ProcessNewBaseid(baseid, messageid)
begin
    create node N;
    insert baseid into BaseidHash table;
    set status = PROCESSING in N as well as hash table;
    put pointer to N in hash table entry associated with baseid;
    send messageid to input queue of Server Serverid;
    return TRUE;
end
Further, according to another embodiment of the present invention, the dispatcher 600 reads the messages 100 from first message queue 500 and puts them in the pending message list 900 if they conflict with an existing baseid for a message 100 being processed. If there is no conflict then the message 100 is directly sent to the K outbound message processing servers 400.
The incoming messages 100 are given priority over pending messages in the pending message list 900. The dispatcher 600 further switches to the pending message list 900 to send the pending messages to the K outbound message processing servers 400, when the input message queue 500 is empty.
Further, the "baseid" of the message is set to status "idle"; so that any other idle K outbound message processing servers 400 can pick up the next baseid in the queue for further processing and thereby ensures that messages for a given baseid are not bound to any particular K outbound message processing servers 400.
Once the baseid message is picked up by the idle K outbound message processing servers 400 from the pending message list 900; the status of baseid is changed to "processing"; thereby ensuring that no other baseid message is being processed until the service at the K outbound message processing servers 400 is completed and thereby fulfills the serialization requirement R1 along with optimal load balancing.
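The idle/processing bookkeeping described above may be sketched, under assumed names (hash_table, fcfs_list, on_server_done, pick_pending), as a simplified illustration:

```python
def on_server_done(baseid, hash_table):
    """Cleanup on service completion: drop the baseid entirely when no
    message remains, otherwise mark it IDLE for any idle server to pick up."""
    node = hash_table[baseid]
    if not node["fcfs_list"]:
        del hash_table[baseid]
    else:
        node["status"] = "IDLE"

def pick_pending(hash_table):
    """Pick the first IDLE baseid, pop its head message in FCFS order
    and flip the status back to PROCESSING."""
    for baseid, node in hash_table.items():
        if node["status"] == "IDLE":
            node["status"] = "PROCESSING"
            return baseid, node["fcfs_list"].pop(0)
    return None
```

While a baseid is marked PROCESSING, no other message for it is served, preserving the requirement R1; once IDLE, any free server may take its next message, which is the load balancing property.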
Figure 4 of the present invention illustrates a scheduling system and method to provide optimal allocation of messages under serialization requirement R2.
Incoming messages 100, received by the scheduler system are stored in a memory cache 200; and are further accessed and indexed based on the type of messages, messageid and baseid 700 by the first message queue 500 of the dispatcher 600.
According to one of the embodiments of the present invention, the incoming message 100 may be a type 1 message 100(a) or a type 2 message 100(b).
The scheduler system prioritizes the messages under the serialization requirement R2; wherein the type 2 messages 100(b) are not processed in FCFS order for messages that are related to a particular baseid, and further the type 2 messages 100(b) have a lower priority as compared to type 1 messages 100(a).
During initialization of the scheduler system, the second message queue "SvcOverQ" 800 processes one message for each K outbound message processing servers 400 signifying that all these servers are ready to do work.
Further, the second message queue "SvcOverQ" 800 provides information whether a given process for a K outbound message processing servers 400 has been completed (Serverid is given for this purpose).
On completion of the given output server process; the second message queue "SvcOverQ" 800 communicates the same to the dispatcher 600. The dispatcher 600 works only if there is an idle K outbound message processing servers 400 available to do the next processing.
Further, the dispatcher 600 checks for the serialization requirement R2 against the input message 100, if a type 1 message 100(a) with high priority is to be processed and there are no serialization conflicts then the type 1 message 100(a) is dispatched to the idle K outbound message processing servers 400 for processing.
But, if the type 1 message 100(a) with high priority is to be processed and there is a serialization conflict, then the dispatcher 600 dispatches the messageid of the type 1 message 100(a) to the pending list 900 of the dispatcher 600.
Further, if the input message is a type 2 message 100(b), which has lower priority than type 1 messages 100(a), then the type 2 messages 100(b) are queued in a third message queue "Type2List" 1000. The third message queue "Type2List" 1000 gets lower priority as compared to the type 1 messages 100(a) pending in the pending message list 900 as per the serialization requirement R2, and further prevents the clogging of the path for type 1 messages 100(a) on occurrence of a burst of type 2 messages 100(b) (e.g. a trade burst from an exchange).
The third message queue "Type2List" 1000 comprises a list of nodes (note that there is no status field for these), each of which has a linked list of type 2 messages 100(b) to be processed.
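A minimal sketch of such a Type2List, with an ordered mapping standing in for the node list and its linked lists (all names here are assumptions of this illustration):

```python
from collections import OrderedDict, deque

class Type2List:
    """A list of per-baseid nodes (no status field), each node holding
    a FIFO of type 2 messageids awaiting service."""
    def __init__(self):
        self.nodes = OrderedDict()       # baseid -> deque of messageids

    def add(self, baseid, messageid):
        self.nodes.setdefault(baseid, deque()).append(messageid)

    def pop_for(self, baseid):
        """Return the next messageid for baseid; delete the node once
        its list drains (the hash-table pointer would then be NULL)."""
        queue = self.nodes[baseid]
        messageid = queue.popleft()
        if not queue:
            del self.nodes[baseid]
        return messageid
```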
On completion of the processing, the K outbound message processing server 400 puts across the second message queue "SvcOverQ" 800 the following message <Serverid, baseid, Type2Flag>; wherein the Type2Flag is TRUE if the last message processed by the server was a type 2 message 100(b), otherwise it is set to FALSE.
The logic for K outbound message processing servers 400 is given below:
loop forever
    Wait on queue for next messageid;
    Receive messageid;
    Pull out corresponding message from in-memory message cache;
    Invoke service for given message and wait for response;
    Send <Serverid, baseid, Type2Flag> to SvcOverQ;
end loop
The logic for the Dispatcher of scheduling system described in figure 4 has the following changes when compared with the logic for the dispatcher of scheduling system described in figure 3:
• Cleanup:
o If Type2Flag is TRUE then it hashes into the BaseidHash table and follows the pointer to the Type2 list for the given baseid. If there is no further type2 message in the list it deletes the baseid node in the Type2List and marks the BaseidHash table entry for the Type2 pointer as NULL. If the hash table pointer for the pending list is also NULL then it deletes the entry completely from the hash table.
o If the Type2Flag is FALSE then the processing is identical to Option S2 with one difference. If there is no message pending for a given baseid then the pointer to the pending list for the baseid is set to NULL. Only if both pending list and type2 list pointers are NULL can the baseid be removed fully from the hash table.
• NonBlockingRead:
The major difference in S3 is that whenever a type 2 message is read it is put into the Type2List. First a check is done against the BaseidHash table. If an entry exists for the baseid but the type2 pointer is NULL, then a new node is created in the Type2List and a pointer to this node is added in the hash table. The type2 messageid is inserted into the linked list for this node. The NonBlockingRead function returns FALSE for a type2 message even if the BaseidHash table was empty to start with.
• ProcessFromPendingList undergoes no change since PendingList caters only to type1 messages and not to type2 messages.
• ProcessFromType2List: This is a new function to be added last in the main while loop of S2. Essentially the priority is to first process new type1 messages, then PendingList messages, and only then type2 messages. In such a case a type2 message is picked up from the Type2List and sent to the output server which has just finished processing. We could keep a global list of type2 messages, but in case there is a burst for a single baseid we would like options for fairness across baseids. In this case, we can go in round robin order across the nodes in Type2List, serving one type2 message at a time from each node. We anyway have the baseid whenever an output server completes, so moving to the next one in the list should be trivial.
• BlockingRead: In this case there is no work to do from PendingList or from Type2List. Whatever message is read is processed regardless of whether it is a type1 message or a type2 message, since there is an output server idle. There is an exception to this which we will discuss below.
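The round-robin fairness across baseids described above can be illustrated by the following sketch (round_robin_drain is an assumed name; the dispatcher described above serves one message per server completion rather than draining in a loop):

```python
from collections import deque

def round_robin_drain(type2_nodes):
    """Cycle across baseid nodes, serving one type 2 messageid per node
    per pass, so a burst on one baseid cannot starve the others."""
    order = []
    ring = deque(type2_nodes.items())    # (baseid, deque of messageids)
    while ring:
        baseid, msgs = ring.popleft()
        order.append((baseid, msgs.popleft()))
        if msgs:                         # node not yet empty: requeue it
            ring.append((baseid, msgs))
    return order
```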
According to another embodiment of the present invention, the scheduling system further comprises a "High water mark H" on the K outbound message processing servers 400 for servicing the type 2 messages 100(b) in instances wherein the type 2 message 100(b) processing is slower than the type 1 message 100(a) processing.
The scheduling system further provides lightweight and optimal processing, with low overhead service times of the order of several microseconds for dispatching to the K outbound message processing servers 400. Thus a burst of 1000 type 2 messages 100(b) can easily be processed by the dispatcher 600 in one second.
On occurrence of burst of type 2 messages 100(b); wherein no type 1 messages are to be processed at the K outbound message processing servers 400, the type 2 messages will be automatically sent to the idle K outbound message processing servers 400 for further processing.
Further, during this process, if any type 1 message 100(a) comes in, then the "High water mark H" on the output server marks the last type 2 message 100(b) processed as "H" and switches to process the newly available type 1 message 100(a). On completion of the processing of the type 1 message, and if no more type 1 messages are pending, then the system switches to the "H" labeled type 2 message and continues the processing of the remaining type 2 messages.
According to one of the embodiments of the present invention, the "High water mark H" is configurable and the dispatcher needs to access it for every cycle. Further, the dispatcher additionally keeps a count of output server processes that service type 2 messages and ensures that they are well within the water mark H.
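The count kept against the high water mark H may be sketched as a simple gate (Type2Gate and its method names are assumptions of this illustration):

```python
class Type2Gate:
    """Tracks how many of the K output servers are currently tied up
    with type 2 messages and enforces the configurable mark H."""
    def __init__(self, high_water_mark: int):
        self.h = high_water_mark
        self.in_service = 0

    def try_acquire(self) -> bool:
        """Permit a type 2 dispatch only while the count is below H."""
        if self.in_service < self.h:
            self.in_service += 1
            return True
        return False

    def release(self) -> None:
        """Called when a type 2 service call completes (via SvcOverQ)."""
        self.in_service -= 1
```

This keeps a type 2 burst from occupying all K servers, so incoming type 1 messages always find capacity.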
Figure 5 of the present invention illustrates a system and method for scheduling both the type 1 and type 2 messages simultaneously and providing optimal allocation of messages under serialization requirements R1 and R2.
Incoming messages 100 are received by the scheduler system, stored in a memory cache 200, and further accessed and separated by the router 1100 based on the type of message.
The type 1 messages 100(a) are indexed based on the messageid and baseid 700 by the type 1 first message queue 1200 of the type 1 dispatcher 1300;
Further, the type 2 messages 100(b) are indexed based on the messageid and baseid 700 by the type 2 first message queues 1500 of the type 2 dispatchers 1600.
Further, the type 1 dispatcher 1300 checks for the serialization requirement R1 against the type 1 message 100(a), if there are no serialization conflicts then the type 1 message 100(a) is dispatched to the idle K outbound message processing servers 400 for processing. But, if there is a serialization conflict, then the dispatcher 600 dispatches the messageid to the pending list 900 of the dispatcher 600.
The logic for K outbound message processing servers 400 is given below:
loop forever
    Wait on queue for next messageid;
    Receive messageid;
    Pull out corresponding message from in-memory message cache;
    Invoke service for given message and wait for response;
    Send to SvcOverQ;
end loop
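The server loop above can be sketched in Python. The queue, cache, and service objects are hypothetical stand-ins for the patent's input queue, in-memory message cache 200, and service invocation; the `None` shutdown sentinel is an addition for testability and is not part of the specification.

```python
import queue

def output_server_loop(server_id, input_q, message_cache, svc_over_q, service):
    """One outbound message processing server: pull a messageid, look the
    message up in the in-memory cache, invoke the service and wait for it,
    then report completion on SvcOverQ so the dispatcher sees the server idle."""
    while True:
        messageid = input_q.get()          # wait on queue for next messageid
        if messageid is None:              # shutdown sentinel (not in the patent)
            break
        message = message_cache[messageid] # pull message from in-memory cache
        service(message)                   # invoke service and wait for response
        svc_over_q.put(server_id)          # send Serverid to SvcOverQ
```

In the described system K such loops run in parallel, one per outbound server, all feeding the single SvcOverQ that the dispatcher blocks on.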
Further, second message queue "SvcOverQ" 800 is used to communicate the process status of the K outbound message processing servers 400 to the Dispatcher 600.
During initialization of the scheduler system, the second message queue "SvcOverQ" 800 has one type 1 message 100(a) for each K outbound message processing servers 400 signifying that all these K outbound message processing servers 400 are ready to do work.
Further, second message queue "SvcOverQ" 800 provides whether a given process has been completed (Serverid is given for this purpose) by the K outbound message processing server 400.
On completion of the given process by the K outbound message processing server 400; the second message queue "SvcOverQ" 800 communicates the same to the dispatcher 600.
The dispatcher 600 works only if there is an idle K outbound message processing server 400 available to start the processing.
The logic for the Dispatcher is given below.
loop forever
    Wait on SvcOverQ for next output server to become free;
    Receive from SvcOverQ;
    Cleanup(Serverid, baseid, PendingList); // will be explained later
    Set messagefoundforSvc = FALSE;
    while (messagefoundforSvc == FALSE) do
        if (PendingList is Empty) then
            messagefoundforSvc = BlockingRead(InputMessageQueue, Serverid);
        else
            messagefoundforSvc = NonBlockingRead(InputMessageQueue, Serverid);
            if (messagefoundforSvc == EMPTYQUEUE) then
                messagefoundforSvc = ProcessFromPendingList(Serverid);
            endif
        endif
    end while
end loop
function BlockingRead(InputMessageQueue, Serverid) // valid only if PendingList is empty
begin
    Read from InputMessageQueue;
    return ProcessNewBaseid(baseid, messageid);
end
function NonBlockingRead(InputMessageQueue, Serverid)
begin
    Read InputMessageQueue in non-blocking mode for next message;
    if (message is available in InputMessageQueue) then
        Pull out entry from BaseidHash table pertaining to this baseid;
        if (no entry exists in BaseidHash table) then
            return ProcessNewBaseid(baseid, messageid);
        else // insert into PendingList
            follow pointer to node as obtained from hash table;
            put messageid at tail of FCFS sublist of this node;
            return FALSE;
        endif
    else
        return EMPTYQUEUE;
    endif
end
procedure Cleanup(Serverid, baseid, PendingList)
begin
    Hash into BaseidHash table with baseid;
    Follow to node via pointer in hash table;
    if (messageid list for baseid is empty) then
        remove node from pending list and baseid from hash table;
    else
        mark status as IDLE in node and in hash table;
    endif
end
function ProcessFromPendingList(Serverid)
begin
    returnvalue = FALSE;
    scan through <baseid> list for first node with status == IDLE;
    if (node is found) then
        mark node status and hash table status as PROCESSING;
        remove messageid from node in FCFS order;
        send messageid to input queue of Server Serverid;
        returnvalue = TRUE;
    endif;
    return returnvalue;
end
function ProcessNewBaseid(baseid, messageid)
begin
    create node N;
    insert baseid in BaseidHash table;
    set status = PROCESSING in N as well as in hash table;
    put pointer to N in hash table entry associated with baseid;
    send messageid to input queue of Server Serverid;
    return TRUE;
end
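The dispatcher bookkeeping described by these functions can be sketched as a minimal Python model. Names are hypothetical; the real system sends each messageid to a per-server input queue, for which a simple `dispatched` list stands in here, and the per-baseid node holds the status flag plus the FCFS sublist of waiting messageids.

```python
from collections import deque

class Node:
    """One node per active baseid: a status flag and an FCFS sublist."""
    def __init__(self):
        self.status = "PROCESSING"
        self.fcfs = deque()

baseid_hash = {}   # BaseidHash table: baseid -> Node
dispatched = []    # stand-in for "send messageid to input queue of Serverid"

def process_new_baseid(baseid, messageid):
    """ProcessNewBaseid: create node N with status PROCESSING and dispatch."""
    baseid_hash[baseid] = Node()
    dispatched.append(messageid)
    return True

def enqueue_or_dispatch(baseid, messageid):
    """Non-blocking read path: dispatch if baseid is new, else queue FCFS."""
    node = baseid_hash.get(baseid)
    if node is None:
        return process_new_baseid(baseid, messageid)
    node.fcfs.append(messageid)        # conflict: put at tail of FCFS sublist
    return False

def cleanup(baseid):
    """Cleanup: a server finished a message for this baseid."""
    node = baseid_hash[baseid]
    if not node.fcfs:
        del baseid_hash[baseid]        # nothing pending: drop the node
    else:
        node.status = "IDLE"           # any idle server may now pick it up

def process_from_pending_list():
    """ProcessFromPendingList: scan for the first IDLE baseid and dispatch."""
    for node in baseid_hash.values():
        if node.status == "IDLE":
            node.status = "PROCESSING"
            dispatched.append(node.fcfs.popleft())  # FCFS order preserved
            return True
    return False
```

Because a baseid is marked PROCESSING while exactly one of its messages is in flight, and IDLE otherwise, at most one message per baseid is ever being serviced, which is the serialization requirement R1, while any idle server can pick up any IDLE baseid, which gives the load balancing.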
Further, according to another embodiment of the present invention, the dispatcher 600 reads the type 1 messages 100(a) from type 1 first message queue 1200 and puts them in the pending message list 900 if they conflict with an existing baseid for a type 1 message 100(a) being processed. If there is no conflict then the type 1 message 100(a) is directly sent to the K outbound message processing servers 400.
The incoming type 1 messages 100(a) are given priority over pending messages in the pending message list 900. The dispatcher 600 switches to the pending message list 900 to send the pending messages to the K outbound message processing servers 400 when there are no more type 1 messages 100(a) to be processed.
Further, the "baseid" of the message is set to status "idle"; so that any other idle K outbound message processing servers 400 can pick up the next baseid in the queue for further processing and thereby ensures that the type 1 messages 100(a) for a given baseid are not bound to any particular K outbound message processing servers 400.
Once the baseid message is picked up by the idle K outbound message processing servers 400 from the pending message list 900; the status of baseid is changed to "processing"; thereby ensuring that no other baseid message is being processed until the service at the K outbound message processing servers 400 is completed and thereby fulfills the serialization requirement R1 along with optimal load balancing.
According to another embodiment of the present invention, the type 2 messages 100(b) are indexed based on the messageid and baseid 700 by the type 2 first message queues 1500 of the type 2 dispatchers 1600.
The type 2 dispatcher 1600 reads the type 2 messages 100(b) from type 2 first message queue 1500 and puts them in the third message queue "type2list" 1100; if the type 2 message 100(b) conflicts with an existing baseid message being processed.
But, if there is no conflict, then the type 2 message is directly sent to the M outbound message processing servers 1400 for processing the type 2 messages 100(b).
The logic for M outbound message processing servers 1400 is given below:
loop forever
    Wait on queue for next messageid;
    Receive messageid;
    Pull out corresponding message from in-memory message cache;
    Invoke service for given message and wait for response;
    Send to SvcOverQ;
end loop
In this way, both type 1 and type 2 messages are processed separately by the scheduler system as described in figure 5, thereby fulfilling the serialization requirements R1 and R2 along with optimal and lightweight load balancing.
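The routing step of figure 5 can be illustrated with a small Python sketch. The field names are hypothetical; the sketch only shows the router separating cached incoming messages by type so that the type 1 first message queue (feeding the K servers under R1) and the type 2 first message queues (feeding the M servers under R2) are loaded independently.

```python
# Hypothetical sketch of the figure 5 routing step: the router inspects
# each incoming message and forwards its messageid to the queue of the
# matching dispatcher, so the two message classes never block each other.

def route(messages):
    type1_queue, type2_queue = [], []
    for msg in messages:
        if msg["type"] == "type1":     # e.g. orders, confirmations, cancellations
            type1_queue.append(msg["messageid"])
        else:                          # e.g. trades, confirmations, cancellations
            type2_queue.append(msg["messageid"])
    return type1_queue, type2_queue
```

Each queue is then drained by its own dispatcher against its own serialization requirement, which is what lets the two classes be processed simultaneously.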
ADVANTAGES OF THE INVENTION:
• The present invention provides a system and method for priority queuing with dynamic class switching and with allocation constraints.
• Provides and implements the scheduler design for priority scheduling.
• Enables execution of messages in microseconds, with the largest overhead being the queue read and queue write.
• Enables optimal load balancing of messages while preserving serialization constraints.
• Provides an optimal and lightweight method for prioritizing the scheduling of messages.
WE CLAIM:
1) A method for priority scheduling of messages and dynamic class switching of message types with restrictive serialization constraints; wherein the said method comprises the computer implemented steps of:
a) Storing an incoming message in the memory cache on receipt of the incoming message by a scheduler;
b) Assigning a baseid and messageid for the said messages by the first message queue of a dispatcher; wherein all the related messages have the same baseid and all the corresponding messages have a unique messageid.
c) Indexing the said messages based on the message type, assigned base id's and message id's by the first message queue of the dispatcher;
d) Verification of the base id's and message id's assigned to the said messages by the said dispatcher;
e) Checking the status of the outbound message processing server's by the second message queue;
f) Communicating the status of the outbound message processing server's as either "idle" or "processing" by the second message queue to the said dispatcher.
g) Assigning a messageid to the idle outbound message processing server;
h) Pulling out the message with the corresponding assigned messageid from the message cache and either dispatching the same in the first come first serve order to the idle outbound message processing server in case of no conflicts in the base id's, or, in case of conflict in base id's, dispatching the said message to the pending list of the said dispatcher;
i) Invoking a service call for a messageid on receipt of the corresponding message for the messageid by the idle outbound message processing server;
j) Communicating the change in status of the outbound message processing server from "idle" to "processing" by the second message queue "SvcOverQ" to the said dispatcher;
k) Communicating the completion of the processing of the messageid by the outbound message processing server to the dispatcher by the second message queue "SvcOverQ";
l) Switching to the pending message list of the dispatcher for processing the pending messages on completion of all the messages with no serialization conflicts;
m) Repeating steps d) to k) for optimal load balancing of messages with priority scheduling of message types.
2) A method as claimed in claim 1, wherein the said incoming message may be a type 1 or type 2 message;
3) A method as claimed in claim 2, wherein the said type 1 message comprises of orders, order confirmation and order cancellation.
4) A method as claimed in claim 2, wherein the said type 2 message comprises of trade, trade confirmation and trade cancellations.
5) A method as claimed in claim 2, wherein the type 1 messages have higher priority than type 2 messages.
6) A method as claimed in claim 2, wherein only the type 1 messages are processed in first come first serve order.
7) A method as claimed in claim 1, wherein the said scheduler comprises of a message queue and a dispatcher.
8) A method as claimed in claim 1, wherein the said dispatcher comprises of a first message queue, second message queue and a pending message list.
9) A method as claimed in claim 1, wherein the said scheduler receives the said incoming messages and dispatches the same to the outbound message processing server for processing the said messages.
10) A method as claimed in claim 1, wherein each of the said outbound message processing server processes one message at a time.
11) A system for priority scheduling and dynamic class switching of message types with restrictive serialization constraints comprising:
a) a scheduler for receiving an incoming message; wherein the scheduler comprises of
(1) a memory cache for storing the incoming messages; wherein an incoming message comprises of a type 1 message and type 2 message.
(2) at least one outbound message processing server for processing the said incoming message;
(3) a dispatcher for executing computer implemented steps of dispatching the message to the said outbound message processing server; wherein the dispatcher further comprises of:
(a) first message queue for assigning a baseid and messageid to the said corresponding message; wherein all the related messages have the same baseid and all the corresponding messages have a unique message id.
(b) second message queue for communicating the status of the said outbound message processing server to the dispatcher for optimal load balancing of messages with priority scheduling of message types.
(c) Pending message list to store the list of pending type 1 messages.
12) A system as claimed in claim 11, wherein the said type 1 message comprises of orders, order confirmation and order cancellation.
13) A system as claimed in claim 11, wherein the said type 2 message comprises of trade, trade confirmation and trade cancellations.
14) A system as claimed in claim 11, wherein each of the said outbound message processing server processes one message at a time.
15) A method for priority scheduling of messages and dynamic class switching of message types with less restrictive serialization constraints; wherein the said method comprises the computer implemented steps of:
a) Storing an incoming message in the memory cache on receipt of the incoming message by a scheduler, wherein the said message is a type 1 or type 2 message;
b) Assigning a baseid and messageid for the said message type by the first message queue of a dispatcher; wherein all the related messages have the same baseid and all the corresponding messages have a unique messageid.
c) Indexing the said messages based on the message type, assigned base id's and message id's by the first message queue of the dispatcher;
d) Verification of the base id's and message id's assigned to the said messages by the said dispatcher;
e) Checking the status of the outbound message processing server's by the second message queue;
f) Communicating the status of the outbound message processing server's as either "idle" or "processing" by the second message queue to the said dispatcher.
g) Assigning a type 1 messageid to the idle outbound message processing server;
h) Pulling out the type 1 message with the corresponding assigned type 1 messageid from the message cache and further
• dispatching the type 1 message in the first come first serve order to the idle outbound message processing server in case of no conflicts in the base id's or
• dispatching the type 1 message to the pending list of the said dispatcher in case of conflict in base id's;
i) Invoking a service call for a type 1 messageid on receipt of the corresponding message for the messageid by the idle outbound message processing server;
j) Communicating the change in status of the outbound message processing server from "idle" to "processing" by the second message queue to the said dispatcher;
k) Communicating the completion of the processing of the type 1 messageid by the outbound message processing server to the dispatcher by the second message queue;
l) Repeating steps d) to k) for processing all the remaining type 1 messages having no conflicts in the base id's;
m) Switching to the pending message list of the dispatcher for processing the pending type 1 messages on completion of processing all the type 1 messages with no conflicts in the base id's;
n) Repeating steps d) to k) for processing all the pending type 1 messages having conflicts in the base id's;
o) Switching to the type 2 message list of the dispatcher for processing the type 2 messages on completion of processing all the type 1 messages with and without conflicts in the base id's;
p) Repeating steps d) to k) for optimal load balancing of messages with priority scheduling of message types.
16) A method as claimed in claim 15, wherein the said type 1 message comprises of orders, order confirmation and order cancellation.
17) A method as claimed in claim 15, wherein the said type 2 message comprises of trade, trade confirmation and trade cancellations.
18) A method as claimed in claim 15, wherein the type 1 messages have higher priority than type 2 messages.
19) A method as claimed in claim 15, wherein the type 1 pending messages have higher priority for processing than the type 2 messages.
20) A method as claimed in claim 15, wherein the type 1 messages without any conflict for base id's have higher priority for processing than the type 1 messages with conflict for base id's.
21) A method as claimed in claim 15, wherein only the type 1 messages are processed in first come first serve order.
22) A method as claimed in claim 15, wherein the said scheduler receives the said incoming messages and dispatches the same to the outbound message processing server for processing the said messages.
23) A method as claimed in claim 15, wherein the said scheduler comprises of a message queue and a dispatcher.
24) A method as claimed in claim 15, wherein the said dispatcher comprises of a first message queue, second message queue, pending message list and a type 2 message list.
25) A method as claimed in claim 15, wherein each of the said outbound message processing server processes one message at a time.
26) A system for priority scheduling and dynamic class switching of message types with less restrictive serialization constraints comprising;
a) a scheduler for receiving an incoming message; wherein the scheduler comprises of
(1) a memory cache for storing the incoming messages; wherein an incoming message comprises of a type 1 message and type 2 message.
(2) at least one outbound message processing server for processing the said incoming message;
(3) a dispatcher for executing computer implemented steps of dispatching the message to the said outbound message processing server; wherein the dispatcher further comprises of:
(a) first message queue for assigning a baseid and messageid to the said corresponding message; wherein all the related messages have the same baseid and all the corresponding messages have a unique message id.
(b) second message queue for communicating the status of the said outbound message processing server to the dispatcher for optimal load balancing of messages with priority scheduling of message types;
(c) Pending message list to store the list of pending type 1 messages;
(d) Type 2 message list to store the list of type 2 messages.
27) A system as claimed in claim 26, wherein the said type 1 message comprises of orders, order confirmation and order cancellation.
28) A system as claimed in claim 26, wherein the said type 2 message comprises of trade, trade confirmation and trade cancellations.
29) A system as claimed in claim 26, wherein each of the said outbound message processing server processes one message at a time.
30) A method for priority scheduling of messages and dynamic class switching of message types with both restrictive and less restrictive serialization constraints simultaneously; wherein the said method comprises the computer implemented steps of:
a) Storing an incoming message in the memory cache on receipt of the incoming message by a scheduler, wherein the said message is a type 1 or type 2 message;
b) Routing the type 1 message to the type 1 first message queue and type 2 message to the type 2 first message queue respectively by the router of the scheduler;
c) Assigning a baseid and messageid for the said type 1 message by the respective type 1 first message queue of the type 1 dispatcher; wherein all the related type 1 messages have the same baseid and all the corresponding type 1 messages have a unique messageid.
d) Indexing the said type 1 messages based on the assigned base id's and message id's by the respective type 1 first message queue of the respective type 1 dispatcher;
e) Verification of the base id's and message id's assigned to the said type 1 messages by the said type 1 dispatcher;
f) Checking the status of the type 1 outbound message processing server's by the type 1 second message queue;
g) Communicating the status of the type 1 outbound message processing server's as either "idle" or "processing" by the type 1 second message queue to the said type 1 dispatcher.
h) Assigning a type 1 messageid to the idle type 1 outbound message processing server;
i) Pulling out the type 1 message with the corresponding assigned type 1 messageid from the message cache and further
• dispatching the type 1 message in the first come first serve order to the idle type 1 outbound message processing server in case of no conflicts in the base id's or
• dispatching the type 1 message to the pending list of the said type 1 dispatcher in case of conflict in base id's;
j) Invoking a service call for a type 1 messageid on receipt of the corresponding type 1 message for the messageid by the idle type 1 outbound message processing server;
k) Communicating the change in status of the type 1 outbound message processing server from "idle" to "processing" by the type 1 second message queue "SvcOverQ" to the said type 1 dispatcher;
l) Communicating the completion of the processing of the type 1 messageid by the type 1 outbound message processing server to the type 1 dispatcher by the second message queue;
m) Repeating steps d) to k) for processing all the remaining type 1 messages having no conflicts in the base id's;
n) Switching to the pending message list of the dispatcher for processing the pending type 1 messages on completion of processing all the type 1 messages with no conflicts in the base id's;
o) Repeating steps d) to k) for processing all the pending type 1 messages having conflicts in the base id's;
p) Simultaneously assigning a base id and message id for the said type 2 message by the respective type 2 first message queue of the type 2 dispatcher; wherein all the related type 2 messages have the same baseid and all the corresponding type 2 messages have a unique messageid;
q) Indexing the said type 2 messages based on the assigned base id's and message id's by the respective type 2 first message queue of the respective type 2 dispatcher;
r) Checking the status of the type 2 outbound message processing server's by the type 2 second message queue;
s) Communicating the status of the type 2 outbound message processing server's as either "idle" or "processing" by the type 2 second message queue "SvcOverQ" to the said type 2 dispatcher;
t) Assigning a type 2 messageid to the idle type 2 outbound message processing server;
u) Pulling out the type 2 message with the corresponding assigned type 2 messageid from the message cache and further dispatching the type 2 message to the idle type 2 outbound message processing server for processing the type 2 messages;
v) Invoking a service call for a type 2 messageid on receipt of the corresponding type 2 message for the messageid by the idle type 2 outbound message processing server;
w) Communicating the change in status of the type 2 outbound message processing server from "idle" to "processing" by the type 2 second message queue "SvcOverQ" to the said type 2 dispatcher;
x) Communicating the completion of the processing of the type 2 messageid by the type 2 outbound message processing server to the type 2 dispatcher by the second message queue;
y) Repeating steps p) to x) for processing all the remaining type 2 messages simultaneously for optimal load balancing of messages with priority scheduling of message types.
31) A method as claimed in claim 30, wherein the said type 1 message comprises of orders, order confirmation and order cancellation.
32) A method as claimed in claim 30, wherein the said type 2 message comprises of trade, trade confirmation and trade cancellations.
33) A method as claimed in claim 30, wherein the type 1 messages have higher priority than type 2 messages.
34) A method as claimed in claim 30, wherein the type 1 messages without any conflict for base id's have higher priority for processing than the type 1 messages with conflict for base id's.
35) A method as claimed in claim 30, wherein only the type 1 messages are processed in first come first serve order.
36) A method as claimed in claim 30, wherein the said scheduler receives the said incoming messages and dispatches the same to the outbound message processing server for processing the said messages.
37) A method as claimed in claim 30, wherein the said scheduler comprises of a message queue and a dispatcher.
38) A method as claimed in claim 30, wherein the said dispatcher comprises of a first message queue, second message queue, pending message list and a type 2 message list.
39) A method as claimed in claim 30, wherein each of the said outbound message processing server processes one message at a time.
40) A system for priority scheduling and dynamic class switching of message types with both restrictive and less restrictive serialization constraints comprising:
a) a scheduler for receiving an incoming message; wherein the scheduler comprises of
(1) a memory cache for storing the incoming messages; wherein an incoming message comprises of a type 1 message and type 2 message.
(2) at least one outbound message processing server for processing the said incoming message;
(3) at least one dispatcher for dispatching the message to the said outbound message processing server; wherein the dispatcher further comprises of:
(a) at least one first message queue for assigning a baseid and messageid to the said corresponding message; wherein all the related messages have the same baseid and all the corresponding messages have a unique message id.
(b) At least one second message queue "SvcOverQ" for communicating the status of the said outbound message processing server to the dispatcher for optimal load balancing of messages with priority scheduling of message types;
(c) Pending message list to store the list of pending type 1 messages;
(d) Type 2 message list to store the list of type 2 messages.
41) A system as claimed in claim 40, wherein the said type 1 message comprises of orders, order confirmation and order cancellation.
42) A system as claimed in claim 40, wherein the said type 2 message comprises of trade, trade confirmation and trade cancellations.
43) A system as claimed in claim 40, wherein each of the said outbound message processing server processes one message at a time.
44) A system and method substantially as herein described with reference to and as illustrated by the accompanying drawings.
| # | Name | Date |
|---|---|---|
| 1 | Other Document [28-07-2016(online)].pdf | 2016-07-28 |
| 2 | Examination Report Reply Recieved [28-07-2016(online)].pdf | 2016-07-28 |
| 3 | Description(Complete) [28-07-2016(online)].pdf | 2016-07-28 |
| 4 | Claims [28-07-2016(online)].pdf | 2016-07-28 |
| 5 | Abstract [28-07-2016(online)].pdf | 2016-07-28 |
| 6 | 2517-MUM-2010-Correspondence to notify the Controller (Mandatory) [23-02-2018(online)].pdf | 2018-02-23 |
| 7 | 2517-MUM-2010-Written submissions and relevant documents (MANDATORY) [16-03-2018(online)].pdf | 2018-03-16 |
| 8 | Response to FER_2517-MUM-2010.pdf | 2018-08-10 |
| 9 | Response to FER 2517-MUM-2010-28 July 2016.pdf | 2018-08-10 |
| 10 | Amended Complete specification_Clean copy.pdf | 2018-08-10 |
| 11 | Amended Claims_Clean copy.pdf | 2018-08-10 |
| 12 | Amended Abstract_Clean copy.pdf | 2018-08-10 |
| 13 | abstract1.jpg | 2018-08-10 |
| 14 | 2517-MUM-2010_EXAMREPORT.pdf | 2018-08-10 |
| 15 | 2517-MUM-2010-HearingNoticeLetter.pdf | 2018-08-10 |
| 17 | 2517-mum-2010-form 3.pdf | 2018-08-10 |
| 18 | 2517-MUM-2010-FORM 3(29-2-2012).pdf | 2018-08-10 |
| 19 | 2517-MUM-2010-FORM 3(20-5-2011).pdf | 2018-08-10 |
| 20 | 2517-MUM-2010-FORM 3(14-8-2012).pdf | 2018-08-10 |
| 21 | 2517-MUM-2010-FORM 26(1-11-2010).pdf | 2018-08-10 |
| 22 | 2517-mum-2010-form 2.pdf | 2018-08-10 |
| 23 | 2517-mum-2010-form 2(title page).pdf | 2018-08-10 |
| 24 | 2517-MUM-2010-FORM 18.pdf | 2018-08-10 |
| 25 | 2517-mum-2010-form 1.pdf | 2018-08-10 |
| 26 | 2517-MUM-2010-FORM 1(1-11-2010).pdf | 2018-08-10 |
| 27 | 2517-mum-2010-drawing.pdf | 2018-08-10 |
| 28 | 2517-mum-2010-description(complete).pdf | 2018-08-10 |
| 29 | 2517-mum-2010-correspondence.pdf | 2018-08-10 |
| 30 | 2517-MUM-2010-CORRESPONDENCE(IPO)-(FER)-(28-7-2015).pdf | 2018-08-10 |
| 31 | 2517-MUM-2010-CORRESPONDENCE(29-2-2012).pdf | 2018-08-10 |
| 32 | 2517-MUM-2010-CORRESPONDENCE(20-5-2011).pdf | 2018-08-10 |
| 33 | 2517-MUM-2010-CORRESPONDENCE(14-8-2012).pdf | 2018-08-10 |
| 34 | 2517-MUM-2010-CORRESPONDENCE(1-11-2010).pdf | 2018-08-10 |
| 35 | 2517-mum-2010-claims.pdf | 2018-08-10 |
| 37 | 2517-mum-2010-abstract.pdf | 2018-08-10 |
| 39 | 2517-MUM-2010-Correspondence to notify the Controller (Mandatory) [14-11-2018(online)].pdf | 2018-11-14 |
| 40 | 2517-MUM-2010-Written submissions and relevant documents (MANDATORY) [10-12-2018(online)].pdf | 2018-12-10 |
| 41 | 2517-MUM-2010-Response to office action (Mandatory) [28-01-2019(online)].pdf | 2019-01-28 |
| 42 | 2517-MUM-2010-Response to office action (Mandatory) [31-01-2019(online)].pdf | 2019-01-31 |
| 43 | 2517-MUM-2010-PatentCertificate13-02-2019.pdf | 2019-02-13 |
| 44 | 2517-MUM-2010-IntimationOfGrant13-02-2019.pdf | 2019-02-13 |
| 45 | 2517-MUM-2010-RELEVANT DOCUMENTS [30-03-2020(online)].pdf | 2020-03-30 |
| 46 | 2517-MUM-2010-RELEVANT DOCUMENTS [23-09-2021(online)].pdf | 2021-09-23 |
| 47 | 2517-MUM-2010-RELEVANT DOCUMENTS [30-09-2022(online)].pdf | 2022-09-30 |
| 48 | 2517-MUM-2010-RELEVANT DOCUMENTS [27-09-2023(online)].pdf | 2023-09-27 |