Specification
A system for high reliability and high performance application message delivery
TECHNICAL FIELD
The present invention relates to the field of delivery of high volumes of
electronic messages. A particularly advantageous but not limitative application
relates to airline billing transactions. In particular, the invention relates to the
delivery of a high number of asynchronous messages (typically > 8,000
messages per second) containing for instance billing information over an
unreliable network to a plurality of log servers, where log files containing the
billing information for a known interval are created and processing of the billing
information for the interval is performed on the log file with the lowest loss of
billing data.
BACKGROUND
In the known art it is sometimes necessary to transmit data across
unreliable networks or using asynchronous transmission protocols such as User
Datagram Protocol (UDP), as the throughput of such sessionless network
transactions is higher than, for example, that of a transaction based on
Transmission Control Protocol (TCP).
Consider Fig. 1, an example of a prior art system. Such a system based
on servers 103 can execute a plurality of applications 123 which transmit billing
information. Because the number of billing transactions to be performed
exceeds the capacity of network 105 using synchronous transactions, or
because of the corresponding loss of throughput as the application waits for an
acknowledgement of the receipt of a transaction, an asynchronous message
125 is sent over network 105. The asynchronous message is or is not received
by a log server 127.
However, log server 127 is not a fault tolerant or high availability server
and is therefore considered unreliable 127. All messages which are received
are stored 129 in a file system 109 for processing by billing server 111 using
billing system 131.
It is understood that because of the messaging protocol 125 used and the
unreliability 127 of log server 107, transactions may be lost.
Thus, it is an object of the present invention to significantly improve the
reliability of the delivery of messages while increasing or at least maintaining
the throughput and while using non-reliable networks.
BRIEF SUMMARY OF THE INVENTION
According to an aspect, the invention relates to a computer-implemented
method of providing high reliability and high performance application message
delivery. The method comprises the following steps performed with at least one
data processor:
at a plurality of log servers coupled to at least an application server, each
application server being associated to an application: receiving asynchronously,
from the at least one application server, application messages containing
application information for an application transaction, each application message
being received by at least some log servers among the plurality of log servers;
receiving asynchronously, from the at least one application server, control
messages at a predetermined interval, each control message being received by
at least some log servers among the plurality of log servers;
at each of the plurality of log servers: storing the received application
messages in a current application data file; storing the received control
messages in a control file and upon receiving an open-close control message,
closing the current application data file, storing said closed application data file
and creating a new application data file as the current application data file;
comparing the control files of the plurality of log servers for a given interval;
and
based on this comparison, determining from among a plurality of application
data files from each of the log servers, an application data file as a best-candidate
for a given interval and forwarding the best-candidate file for post
processing.
Thus, in case some of the application messages forwarded by the
application servers are not received at some of the log servers, the invention
allows determining which application data file is the most reliable and thereby
discarding the other application data files, without requiring a comparison of
the application data files themselves.
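Purely by way of illustration, the behaviour of a single log server in the method above may be sketched as follows. This is a minimal, non-limiting sketch in Python; the class and method names (LogServer, on_application_message, on_control_message) are hypothetical, and modelling files as in-memory lists is an assumption made for brevity only.

    # Illustrative sketch only; all names are hypothetical and files are
    # modelled as in-memory lists rather than actual files on a file system.
    class LogServer:
        def __init__(self, split_every_n=5):
            self.split_every_n = split_every_n   # every Nth control message is the open-close message
            self.closed_data_files = []          # plurality of closed application data files
            self.current_data_file = []          # current application data file
            self.control_file = []               # summary of received control messages

        def on_application_message(self, message):
            # Every asynchronously received application message is stored in
            # the current application data file.
            self.current_data_file.append(message)

        def on_control_message(self, number, summary):
            # A summary of every received control message is kept in the control file.
            self.control_file.append((number, summary))
            if number % self.split_every_n == 0:
                # Open-close control message: close the current application data
                # file, keep it for later comparison, and create a new current file.
                self.closed_data_files.append(self.current_data_file)
                self.current_data_file = []

The comparison of control files and the selection of the best-candidate, sketched further below, then operate on the closed files only.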
Optionally, the invention may comprise any one of the following facultative
features:
In one embodiment, each log server is coupled to a plurality of application
servers, each being associated to at least one application.
Advantageously, the control message comprises a number of application
messages transmitted by the application server. Advantageously, the control
message comprises an identifier that uniquely identifies the order of the control
messages in a sequence of control messages. Preferably, the identifier is a
control message number. Advantageously, the control message comprises at
least one of: an identifier of an application and a timestamp of the application
server. Preferably, each control message comprises an identifier of an
application and a timestamp of the application server.
Preferably, the interval for forwarding a control message is a given time
period.
Advantageously, the open/close control message is an Nth control message
in a sequence of control messages. In one embodiment, N is predetermined. In
one embodiment, the Nth control message is the fifth control message in the
sequence of control messages and the given time period is a two-minute time
period.
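As a worked example of the figures of this embodiment (an illustrative reading, not an additional limitation): with a control message every two minutes and every fifth control message acting as the open/close message, each application data file covers roughly a ten-minute slice of the message stream.

    # Worked arithmetic with the example values above (N = 5 and a two-minute
    # period are the values from the embodiment, not mandatory choices).
    control_period_minutes = 2                       # one control message every two minutes
    n_split = 5                                      # every 5th control message is the open/close message
    file_interval_minutes = control_period_minutes * n_split
    print(file_interval_minutes)                     # -> 10 minutes per application data file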
Advantageously, the best-candidate file is chosen from a set of application
data files for a given interval, taken from the plurality of log servers, that have
the same start and stop points. Preferably, the start and stop points are
determined by the reception of open/close control messages.
According to an advantageous embodiment, the best-candidate file is
chosen, from among the chosen set of files, as the file with the lowest
application message loss rate. According to an advantageous embodiment, in
case some application data files have the same number of application
messages, the best-candidate file is chosen, from among the application data
files with the lowest application message loss rate, as the file with the lowest
control message loss rate.
In one embodiment, a best-candidate file that has lost application messages,
but not more than x percent of the application messages for the interval, is
augmented with the lost application messages existing in other files of the set
of files, x being predetermined. In one embodiment, x is comprised between
fifteen and forty-five.
Advantageously, upon determining from among a plurality of application
data files from each of the plurality of log servers, an application data file as a
best-candidate for a given interval, the server forwards the best-candidate file
for application processing.
In one embodiment, the application is an airline billing transaction
application.
According to another aspect, the invention relates to a non-transitory
computer-readable medium that contains software program instructions, where
execution of the software program instructions by at least one data processor
results in performance of operations that comprise execution of the method of
the invention.
Another aspect of the invention relates to a system for high reliability and
high performance application message delivery. The system comprises:
a plurality of log servers coupled to the output of at least one
application server;
each log server being configured to receive asynchronously, from the at least
one application server: application messages containing application information
and control messages;
each log server being also configured to: store the received application
messages in a current application data file and to store the received control
messages in a control file; and upon receiving an open-close control message,
to close the current application data file, to store said closed application data
file, to add said closed application data file to a plurality of application data files
and to create a new application data file as the current application data file;
a server coupled to the plurality of log servers, said server being
configured to: compare the control files of the plurality of log servers for a given
interval; based on this comparison, determine from among a plurality of
application data files from each of the log servers, an application data file as a
best-candidate for a given interval; and forward the best-candidate file for post
processing.
Optionally, the system comprises a plurality of application servers and a
plurality of applications executing on a processor of any of the plurality of
application servers, each of the application servers having an output coupled to
an input of each of the log servers.
According to another aspect, the invention solves the issues of loss by
providing high reliability and high performance billing message delivery,
comprising: forwarding asynchronously a billing message containing billing
information for an application transaction to each of a plurality of log servers;
forwarding asynchronously control messages to each of the plurality of log
servers at a predetermined interval; storing at each of the plurality of log
servers received billing messages in a current billing data file; storing at each of
the plurality of log servers received control messages in a control file and, upon
receiving an open-close control message, closing the current billing data file,
adding said closed billing data file to a plurality of billing data files and creating
a new billing data file as the current billing data file; and determining from
among a plurality of billing data files from each of the log servers, a billing data
file as a best-candidate for a given interval and forwarding the best-candidate
file for billing processing.
In accordance with a still further aspect of this invention there is
disclosed a system for high reliability and high performance application
message delivery, characterized in that it comprises:
at least one application executing at least part of an application
transaction on one application server,
a plurality of log servers coupled to the output of the at least one
application server;
the at least one application server being configured to forward asynchronously to
each of the plurality of log servers: application messages containing information
for a transaction and control messages;
each log server being configured to: store the received application messages in
a current application data file and to store the received control messages in a
control file; and upon receiving an open-close control message, to close the
current application data file, to store said closed application data file and to
create a new application data file as the current application data file;
a server coupled to the plurality of log servers, said server being
configured to: compare the control files of the plurality of log servers for a given
interval; based on this comparison, determine from among a plurality of
application data files from each of the log servers, an application data file as a
best-candidate for a given interval; and forward the best-candidate file for post
processing.
Preferably, the application is a billing application executing at least part of
a billing transaction. Preferably, the application message is a billing message
containing data related to billing. Preferably, the application data file is a billing
data file.
Another aspect of the invention relates to a computer program product
comprising instructions capable of performing the steps of the method
according to the invention.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
FIG. 1 is a system diagram of a prior art system.
FIG. 2 is a system diagram of the architecture of the present invention.
FIG. 3 is an illustration of the control message data structure.
FIG. 4 is a list of the types of control messages transmitted.
FIG. 5 is a flow chart of processing performed by an application sending billing
messages.
FIG. 6 is a flow chart of processing performed by a log server receiving
messages.
FIG. 7 is a flow chart of processing of a best-candidate log file.
FIG. 8 is a flowchart of improving the quality of a chosen best-candidate file.
FIG. 9 is a simplified illustration of an exemplary embodiment where messages
are transmitted from an application server to a cluster of four log servers.
FIG. 10 is a diagram of the internal architecture of any of the servers of the
system.
DETAILED DESCRIPTION OF THE INVENTION
It is recalled that the present invention handles the delivery of the
notification messages sent by applications by using a cluster of log servers.
Each application notifies all the redundant log servers at once. Each log server
splits this stream of notification messages into separate, manageable files. The
system continuously determines the most reliable file in the cluster and
transfers that file to the recipient.
Optionally the invention may comprise any one of the following
advantageous but nevertheless facultative features.
The control message comprises an identifier of a billing application.
Preferably, the control message also comprises any one of: a timestamp of the
application server, a number of control messages transmitted by the application
(for instance a billing application) and a number of application messages (for
instance billing messages) transmitted by the application. Each application has
a defined interval for forwarding a control message in a given time period, and
an open/close control message is every Nth control message in the sequence
of control messages. According to an advantageous embodiment, the Nth
control message is the fifth control message in the sequence of control
messages and the given time period is two minutes. The system chooses files
for processing by a billing system by creating a best-candidate file chosen from
a set of files for a given interval from the plurality of log servers that have a
same start and stop point. Preferably, the best-candidate file is, from among the
chosen set of files, the file with the lowest message loss rate. The best-candidate
file can be augmented, where messages are lost but no more than thirty
percent of the messages for the interval have been lost, by copying in the lost
messages existing in files of the set of files other than the best-candidate file.
Preferably, upon determining from among a plurality of application data files
from each of the plurality of log servers, an application data file as a best-candidate
for a given interval, the server forwards the best-candidate file for
application processing.
Fig. 2 illustrates a system implementing the preferred embodiment of the
invention. The system has a plurality of applications 201 (a)-(c). These
applications may be a plurality of different applications 201 (a)-(c) running on a
single server; a single application 201 (a) running on a plurality of servers 103;
or a combination of multiple applications 201 (a)-201 (c) running on multiple
different servers. Each of applications 201 (a)-(c) forwards asynchronous
application messages to log servers 203(a)-(b). An application message
contains information related to the application. For instance, the application
message may comprise data related to any one of: billing, customer profile,
etc. It will be understood that while two log servers 203(a)-(b) are illustrated,
the number of log servers is preferably more than two. While the description
focuses on airline billing transaction data, other types of applications could
forward other data in the system of the invention.
Each of log servers 203(a)-(b) has a log server instance 205 and a billing
plug-in 207 which write received application data messages into a current
application data file 209(a) and control messages to a control file 211. Thus the
same application message is sent from an application server 201 (a)-(c) to all
the log servers 203(a)-(b). Possibly all the log servers 203(a)-(b) receive the
same application message. However, in practice, at least some of the
application messages may not be received by all the log servers 203(a)-(b).
Control messages are sent at intervals described later in this document,
which cause current application data file 209(a) to be closed, creating a plurality
of application data files 209(b)-(c) each representative of billing application
messages of a given interval.
The purpose of the control messages is two-fold. First, these messages
are used to (re)synchronize the splitting of the application message stream into
files 209(a)-(c). It is crucial that each log server splits the stream at the same
points in the stream in order to create the synchronized files. Second, each
control message will be used by the correlation algorithm to select the best-candidate
amongst the synchronized files. Therefore, a summary of the control
messages is stored in a control file 211.
A billing section comprising a correlation batch 213, correlation output
215 and sending batch 217 is responsible for determining the best-candidate
of application data files 209(a)-(c) and forwarding the best-candidate to the
billing framework 219.
Fig. 3 illustrates the structure of a control message as transmitted and
stored in control file 211. The same control message is sent from an application
server 201 (a)-(c) to all the log servers 203(a)-(b). Possibly all the log servers
203(a)-(b) receive the same control message. However, in practice, at least
some of the control messages may not be received by all the log servers
203(a)-(b).
Each control message sent by an application 201 (a)-(c) comprises
preferably at least four elements.
The control message comprises an application identifier 303 that
identifies the application server that is the originator of the control message.
Each application 201 (a)-(c) has an application identifier 303, which uniquely
identifies the application. This element allows different billed applications to
send data and control messages to the same cluster (i.e. log servers 203(a)-(b)).
The cluster can easily separate the messages per source.
Timestamp 305 defines a single reference of time in the sending
application server. All the synchronization steps will be based on timestamp 305
of the sending application server. This avoids the clock discrepancies usually
found in a cluster of servers 203(a)-(b). This feature is all the more
advantageous as the number of application servers 201 (a), 201 (b), 201 (c) is
high.
Control message number 307 indicates the unique sequential identifier
for the current control message. This number allows the log server to know
whether the previous control message has been lost. For instance, if two control
messages successively received at a log server present control message
number 307 that differs by more than one increment, then it means that at least
one control message has not been received by said log server.
Application message number 309 indicates the number of application
messages sent by the application 201 (a)-(c). As each log server knows how
many application messages it has actually received, it can determine from this
value how many application messages were lost.
Type of message 311 indicates the type of control message being
forwarded.
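Purely as an illustrative sketch, the control message of Fig. 3 may be represented as follows; the field names mirror the reference numerals 303-311, but the concrete types and encoding are assumptions of the sketch and are not prescribed by the invention. The helper also shows how the application message number 309 lets a log server compute its own loss.

    from dataclasses import dataclass

    # Hypothetical in-memory representation of the control message of Fig. 3.
    @dataclass
    class ControlMessage:
        application_id: str               # 303: uniquely identifies the sending application
        timestamp: float                  # 305: single time reference of the sending application server
        control_message_number: int       # 307: unique sequential identifier of this control message
        application_message_number: int   # 309: number of application messages sent by the application
        message_type: str                 # 311: e.g. "APPLICATION-START", "TIMER-INTERVAL", ...

    def lost_application_messages(control: ControlMessage, received_count: int) -> int:
        # Each log server knows how many application messages it actually received,
        # so the number of lost application messages follows from element 309.
        return control.application_message_number - received_count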
Fig. 4 illustrates the types of control messages that are forwarded from
applications 201 (a)-(c) to log servers 203(a)-(b). Application-Start 403 indicates
that an Application has started and therefore a new application data file should
be created. The corollary to such a control message is Application-Stop 409
which closes current application data file 209(a) when the application shuts
down.
Timer Interval 405 is a control message sent every X period, where X is for
instance and preferably two minutes. Every Nth checkpoint (for instance N=5)
sent by applications 201 (a)-(c), called a splitting checkpoint, is used by log
servers 203(a)-(b) to split the stream: the log server closes the current
of sequential application messages. Such a splitting creates a stop point in one
application data file and a start point in the new application data file. Thus, a
current application data file is closed when the Nth checkpoint control message
is received and a new application data file, that becomes the current application
data file, is then created.
Each start point and stop point is associated to a control message
number 307 which allows identifying the order of transmission. Therefore, it is
easy to identify the application data files having the same start points. It is also
easy to identify the application data files having the same stop points. The
comparison of the application data files of various log servers as well as the
splitting of the stream can therefore be easily achieved.
Since checkpoints can also be lost, log servers 203(a)-(b) use the
control message number 307 of the control message to detect such a loss.
The control message number also informs the log server whether a splitting
point has been missed (control message number modulo N = 0). Values of N
other than five could be used depending on the requirements of the system.
Where a non-splitting checkpoint is lost, the log servers 203(a)-(b) will
simply write that lost event to control file 211. Any lost event in control file 211
will decrease the reliability of the appropriate application data file.
In the event that a splitting checkpoint is lost, the log servers 203(a)-(b)
will close the current application data file 209(a) and open a new one (as if a
splitting checkpoint was received). However, the current application data file
and the new application data file will be out of synchronization, since they have
not been closed/opened at a splitting checkpoint. The control file 211 is
accordingly updated: a lost event for the missing checkpoint and the event of
creating the new file, together with the timestamp. There is no event for closing
the application data file. This will inform the correlation algorithm that a splitting
checkpoint was lost.
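The checkpoint handling described in the preceding paragraphs may be sketched, purely by way of illustration, as follows. The names (CheckpointHandler, the event labels "LOST", "OPEN", "CLOSE") and the use of in-memory lists in place of files are assumptions of the sketch; as a simplification, the timestamp written for a loss detected late is that of the checkpoint that revealed the loss.

    # Illustrative sketch of a log server reacting to received and lost checkpoints.
    class CheckpointHandler:
        def __init__(self, n_split=5):
            self.n_split = n_split          # every Nth checkpoint is a splitting checkpoint
            self.last_number = 0            # control message number of the last checkpoint seen
            self.control_events = []        # stands in for control file 211
            self.closed_files = []          # closed application data files
            self.current_file = []          # current application data file

        def _split(self, closed_cleanly, timestamp):
            if closed_cleanly:
                self.control_events.append(("CLOSE", timestamp))
            self.closed_files.append(self.current_file)
            self.current_file = []
            self.control_events.append(("OPEN", timestamp))

        def on_checkpoint(self, number, timestamp):
            # Record every checkpoint skipped between the last one and this one.
            for missed in range(self.last_number + 1, number):
                self.control_events.append(("LOST", missed))
                if missed % self.n_split == 0:
                    # Lost splitting checkpoint: close/open anyway, but write no
                    # CLOSE event, so the file is known to be out of synchronization.
                    self._split(closed_cleanly=False, timestamp=timestamp)
            if number % self.n_split == 0:
                # Splitting checkpoint received normally.
                self._split(closed_cleanly=True, timestamp=timestamp)
            self.last_number = number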
The END-OF-PERIOD 407 control message is sent by applications 201 (a)-(c)
at a time determined by the application. Typically this control message is sent at
midnight for billing applications in order to separate two working days. Basically,
this message forces a complete resynchronization between the billed
application and the log server. All internal counters are set to zero and a new
application data file and a new control file are started. It is also understood that
the END-OF-PERIOD could correspond to some other period, such as multiple
days, a week, a month or a year.
Since every control message contains the current timestamp of the billed
application, it is now trivial to find out if an END-OF-PERIOD control message
has been lost: the date in the timestamp element sent by the billed application is
no longer the same as the last received date on the log server. In this case, the
log server simulates the reception of END-OF-PERIOD messages 407, sets all
internal counters to zero and starts a new application data file. Control file 211 is
updated as if a splitting checkpoint has been missed.
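By way of illustration only, the date-based detection of a lost END-OF-PERIOD message may be sketched as follows; the daily period and the example timestamps are assumptions taken from the billing example above.

    from datetime import datetime

    # Illustrative sketch: a change of calendar date in the control message
    # timestamp means an END-OF-PERIOD message should have been received.
    def end_of_period_missed(last_seen: datetime, current: datetime) -> bool:
        return current.date() != last_seen.date()

    # Example: a message timestamped just after midnight, following one from the
    # previous evening, triggers a simulated END-OF-PERIOD on the log server.
    print(end_of_period_missed(datetime(2012, 5, 1, 23, 59), datetime(2012, 5, 2, 0, 1)))   # True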
The types of control messages 403-409 as stored in control files 211
allow for splitting the stream of application messages into synchronized
application data files 209(a)-209(c). If no control messages are lost, all files will
be synchronized. When control messages are lost, a number of files in the
cluster will be out-of-synchronization: some file(s) will be closed/opened at a
different timestamp. In addition to the knowledge that files are synchronized, the
system is also informed about the correctness of each file. Both facts about the
application data files will be exploited by a correlation algorithm.
A billing server 219 as implemented in the system of the present
invention must receive the best-candidate determined from among log files
209(a)-(c) on each of log servers 203(a)-(b). The determination of the
best-candidate is done by the correlation batch 213.
The best-candidate selection is based on comparing the control file 211
of each log server 203(a)-(b). By not comparing the numerous and large
application data files 209(a)-(c), this step is executed in real time.
The system aligns the open file/close file events in different control files
211 of each log server 203(a)-(b). The alignment is based on the timestamp of
the events. A quorum of (N+1)/2, N being the number of log servers, is needed
to agree on an alignment. The alignment simply indicates the files for which the
stream has been split on identical points in time. In this nominal case, the
system determines the best-candidate amongst the synchronized application
data files 209(a)-(c) by selecting the application data file that contains firstly the
most messages and secondly the least lost checkpoint messages.
If no quorum is reached, the system will prefer the files for which both an
open file event and a close file event are found. In case of a lost splitting
checkpoint, there is no close event registered in the control file 211. It means
that the system will lower the quorum, but will still only consider the files that
received both the open file and close file events. The system defines the
best-candidate based on firstly the number of application messages and secondly
the number of lost checkpoint messages.
In the extreme case where not a single file has a close file event (this
means the splitting checkpoint was missed by all log servers), the system will
prefer the files firstly with the least lost checkpoint messages and secondly the
most messages.
It is important that the next open file events to consider chronologically
follow the close event of the currently selected best-candidate in order to avoid
sending duplicate messages to the billing server 219.
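The selection logic of the correlation batch 213 may be sketched, purely by way of illustration, as follows. The structure Candidate and its fields are assumptions; the sketch compresses the quorum and alignment step (assumed already performed on the control files) and keeps only the ranking of aligned files described above.

    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical summary of one application data file for one aligned interval,
    # as derived from the corresponding control file.
    @dataclass
    class Candidate:
        message_count: int        # application messages actually stored in the file
        lost_checkpoints: int     # lost checkpoint events recorded in the control file
        has_open_event: bool
        has_close_event: bool

    def select_best_candidate(candidates: List[Candidate]) -> Optional[Candidate]:
        complete = [c for c in candidates if c.has_open_event and c.has_close_event]
        if complete:
            # Nominal and lowered-quorum cases: only files with both open and close
            # events are considered, ranked by most messages, then fewest lost checkpoints.
            return max(complete, key=lambda c: (c.message_count, -c.lost_checkpoints))
        if candidates:
            # No file has a close event (splitting checkpoint missed everywhere):
            # prefer first the fewest lost checkpoints, then the most messages.
            return max(candidates, key=lambda c: (-c.lost_checkpoints, c.message_count))
        return None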
The system also improves the quality of the selected best-candidate by
retrieving a part of missing messages in other synchronized files. The
improvement is only done for synchronized files where the best-candidate has
lost less than x% of the messages (i.e. the number of received messages is
greater than (100-x)%). Advantageously, x is comprised between 15 and 45.
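Finally, the augmentation of a chosen best-candidate may be sketched as follows, purely by way of illustration; modelling the files as mappings from a message identifier to its payload, and the default threshold of thirty percent, are assumptions of the sketch.

    from typing import Dict, Iterable

    # Illustrative sketch: copy into the best-candidate the messages it lost but
    # that exist in other synchronized files, provided it lost less than x percent
    # of the messages sent for the interval.
    def augment_best_candidate(best: Dict[int, bytes],
                               others: Iterable[Dict[int, bytes]],
                               sent_count: int,
                               x_percent: float = 30.0) -> Dict[int, bytes]:
        if sent_count == 0:
            return best
        loss_rate = 100.0 * (sent_count - len(best)) / sent_count
        if loss_rate == 0 or loss_rate >= x_percent:
            return best                      # nothing to recover, or too degraded to augment
        augmented = dict(best)
        for other in others:
            for message_id, payload in other.items():
                augmented.setdefault(message_id, payload)
        return augmented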