
Methods And Systems For Merchant Aggregation Using Context Aware Neural Machine Translation Models

Abstract: Embodiments provide methods and systems for merchant aggregation using context-aware neural machine translation models. A method performed by a server system includes receiving payment transaction data including a merchant name from an acquirer and converting characters of the merchant name into embedding vectors. The method includes generating source hidden vectors corresponding to the characters by applying a neural machine translation (NMT) model over the corresponding embedding vectors and predicting an aggregated merchant name using a decoder. A current character of the aggregated merchant name is predicted by iteratively performing: determining attention weights of the characters based on the source hidden vectors and a current target hidden vector associated with the current character, determining a context vector based on the attention weights, concatenating the context vector with a merchant contextual vector to obtain a context-aware encoded vector, determining an attention mechanism hidden state vector based on the context-aware encoded vector and the current target hidden vector, and providing the attention mechanism hidden state vector as feedback into the decoder for predicting a next character of the aggregated merchant name. FIG. 2


Patent Information

Application #
Filing Date
29 September 2020
Publication Number
13/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ipo@epiphanyipsolutions.com
Parent Application

Applicants

MASTERCARD INTERNATIONAL INCORPORATED
2000 Purchase Street, Purchase, NY 10577, United States of America

Inventors

1. Vikas Bishnoi
s/o Lt. Col. OP Bishnoi, 112/256 Vishnu Nagar,Digari, Jodhpur – 342001, Rajasthan, India
2. Gaurav Dhama
H. No 1551, Third Floor, Block-C, Sector 45, Near St Angel's Junior School, Gurgaon – 122003, Haryana, India
3. Ankur Arora
A 1003 Arvind Appt plot no 9 Dwarka Sector 19B, Delhi, India

Specification

CLAIMS
We claim:

1. A computer-implemented method for determining an aggregated merchant name, comprising:
receiving, by a server system, payment transaction data from an acquirer, the payment transaction data comprising at least a merchant name data field associated with a merchant;
converting, by the server system, a plurality of characters of the merchant name data field into a plurality of embedding vectors, each character of the plurality of characters being associated with an embedding vector of the plurality of embedding vectors;
generating, by the server system, a set of source hidden vectors corresponding to the plurality of characters, a source hidden vector associated with a character generated by applying a neural machine translation (NMT) model using an encoder over a corresponding embedding vector; and
predicting, by the server system, the aggregated merchant name using a decoder, wherein a current character associated with the aggregated merchant name is predicted by iteratively performing:
determining, by the server system, attention weights of the plurality of characters based, at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character,
determining, by the server system, a context vector based, at least in part, on the attention weights associated with the plurality of characters,
concatenating, by the server system, the context vector with a merchant contextual vector for obtaining a context-aware encoded vector,
calculating, by the server system, an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector,
determining, by the server system, the current character based, at least in part, on the attention mechanism hidden state vector, and
providing, by the server system, the attention mechanism hidden state vector as a feedback into the decoder for predicting a next character associated with the aggregated merchant name.

2. The computer-implemented method as claimed in claim 1, wherein the server system is a payment server.

3. The computer-implemented method as claimed in claim 1, wherein the NMT model corresponds to a recurrent neural network (RNN) based encoder-decoder architecture.

4. The computer-implemented method as claimed in claim 1, further comprising:
classifying, by the server system, merchant locations associated with the aggregated merchant name into a group.

5. The computer-implemented method as claimed in claim 1, further comprising:
extracting, by the server system, a merchant name from the merchant name data field of the payment transaction data;
performing, by the server system, data pre-processing over the merchant name for filtering one or more noise characters; and
segmenting, by the server system, the merchant name into the plurality of characters.

6. The computer-implemented method as claimed in claim 1, wherein determining the current character based, at least in part, on the attention mechanism hidden state vector comprises:
generating, by the server system, a probability distribution of characters based at least on the attention mechanism hidden state vector associated with the current character, the probability distribution indicating a selection probability value of a character being selected as the current character for the aggregated merchant name; and
selecting, by the server system, a character having a selection probability value greater than a predetermined threshold value, as the current character for the aggregated merchant name.

7. The computer-implemented method as claimed in claim 1, further comprising:
generating, by the server system, the merchant contextual vector based, at least in part, on merchant-specific information included in the payment transaction data, wherein the merchant contextual vector is utilized for differentiating similar merchant names.

8. The computer-implemented method as claimed in claim 7, wherein the merchant-specific information comprises:
an industry code;
a merchant category code (MCC); and
an average transaction size of the merchant.

9. A server system, comprising:
a communication interface;
a memory comprising executable instructions; and
a processor communicably coupled to the communication interface, the processor configured to execute the executable instructions to cause the server system to at least:
receive payment transaction data from an acquirer, the payment transaction data comprising at least a merchant name data field associated with a merchant;
convert a plurality of characters of the merchant name data field into a plurality of embedding vectors, each character of the plurality of characters being associated with an embedding vector of the plurality of embedding vectors;
generate a set of source hidden vectors corresponding to the plurality of characters, a source hidden vector associated with a character generated by applying a neural machine translation (NMT) model over a corresponding embedding vector; and
predict an aggregated merchant name using a decoder, wherein a current character associated with the aggregated merchant name is predicted by iteratively performing:
determining attention weights of the plurality of characters based, at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character,
determining a context vector based at least in part on the attention weights associated with the plurality of characters,
concatenating the context vector with a merchant contextual vector for obtaining a context-aware encoded vector,
calculating an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector,
determining the current character based, at least in part, on the attention mechanism hidden state vector, and
providing the attention mechanism hidden state vector as a feedback for predicting a next character associated with the aggregated merchant name.

10. The server system as claimed in claim 9, wherein the server system is a payment server.

11. The server system as claimed in claim 9, wherein the NMT model corresponds to a recurrent neural network (RNN) based encoder-decoder architecture.

12. The server system as claimed in claim 9, wherein the server system is further caused at least in part to classify merchant locations associated with the aggregated merchant name into a group.

13. The server system as claimed in claim 9, wherein the server system is further caused at least in part to:
extract a merchant name from the merchant name data field of the payment transaction data;
perform data pre-processing over the merchant name for filtering one or more noise characters; and
segment the merchant name into a plurality of characters.

14. The server system as claimed in claim 9, wherein, to determine the current character, the server system is further caused at least in part to:
generate a probability distribution of characters based at least on the attention mechanism hidden state vector associated with the current character, the probability distribution indicating a selection probability value of a character being selected as the current character for the aggregated merchant name; and
select a character having a selection probability value greater than a predetermined threshold value, as the current character for the aggregated merchant name.

15. The server system as claimed in claim 9, wherein the server system is further caused at least in part to:
generate the merchant contextual vector based, at least in part, on merchant-specific information included in the payment transaction data, wherein the merchant contextual vector is utilized for differentiating similar merchant names.

16. The server system as claimed in claim 15, wherein the merchant-specific information comprises:
an industry code;
a merchant category code (MCC); and
an average transaction size of the merchant.

17. A system for determining an aggregated merchant name, the system comprising:
an encoder configured to:
receive a plurality of embedding vectors for a plurality of characters associated with a merchant name data field in payment transaction data, the payment transaction data received from an acquirer, and
generate a set of source hidden vectors corresponding to the plurality of characters by applying a neural machine translation (NMT) model over each embedding vector of the plurality of embedding vectors;
a decoder configured to predict the aggregated merchant name; and
an attention layer, wherein, at each iteration of predicting a current character of the aggregated merchant name, the attention layer is configured to:
determine attention weights associated with the plurality of characters, based at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character received from the decoder,
determine a context vector based at least in part on the attention weights associated with the plurality of characters, and
concatenate the context vector with a merchant contextual vector for obtaining a context-aware encoded vector,
wherein the decoder is configured to:
calculate an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector,
determine the current character based, at least in part, on the attention mechanism hidden state vector, and
provide the attention mechanism hidden state vector as a feedback for predicting a next character associated with the aggregated merchant name.

18. The system as claimed in claim 17, wherein the NMT model corresponds to a recurrent neural network (RNN) based encoder-decoder architecture.

19. The system as claimed in claim 17, wherein the attention layer is further configured to:
generate the merchant contextual vector based, at least in part, on merchant-specific information included in the payment transaction data, wherein the merchant contextual vector is utilized for differentiating similar merchant names.

20. The system as claimed in claim 19, wherein the merchant-specific information comprises:
an industry code;
a merchant category code (MCC); and
an average transaction size of the merchant.
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patent Rules 2003
COMPLETE SPECIFICATION
(refer section 10 & rule 13)

TITLE OF THE INVENTION:
METHODS AND SYSTEMS FOR MERCHANT AGGREGATION USING CONTEXT-AWARE NEURAL MACHINE TRANSLATION MODELS

APPLICANT(S):

Name:

Nationality:

Address:

MASTERCARD INTERNATIONAL INCORPORATED

United States of America

2000 Purchase Street, Purchase, NY 10577, United States of America

PREAMBLE TO THE DESCRIPTION

The following specification particularly describes the invention and the manner in which it is to be performed.

DESCRIPTION
(See next page)


METHODS AND SYSTEMS FOR MERCHANT AGGREGATION USING CONTEXT-AWARE NEURAL MACHINE TRANSLATION MODELS

TECHNICAL FIELD
The present disclosure relates to artificial intelligence processing systems and, more particularly to, electronic methods and complex processing systems for determining an aggregated merchant name associated with payment transaction data by utilizing machine learning techniques.

BACKGROUND
With the ever-increasing advancement in payment technology, the amount of transaction data available has increased manifold. The transaction data houses meaningful information that provides detailed insights into businesses and stakeholders. Data analytics of the transaction data may provide a range of information, such as patterns in sales that may be used for strategizing, chalking out business plans, and devising marketing plans to improve the business. Merchant information forms a vital part of the transaction data. The merchant information includes a plurality of merchant attributes such as a merchant name, a merchant location, and a merchant identifier.
In general, the transaction data received from acquirers is organized and stored in a database. During the organization, transaction data associated with different merchant locations of the same merchant is aggregated together. For example, stores at different geographical locations associated with a brand are grouped. However, the transaction data received by an acquirer from multiple Point of Sale (POS) terminals displays variations of the merchant name and/or the geographical location of the merchant. This may arise due to errors in initializing or calibrating the POS terminals at the merchant side. Such variations in merchant names affect the organization of the transaction data in the database.
Conventionally, such aggregation of merchant locations is performed using a rule-based system. The rule-based system employs an n-request process and involves substantial manual effort. The aggregation of unconventional merchant names that include emoticons, pictures, and special characters using the rule-based system becomes even more challenging due to optimization problems. Further, the rule-based system involves the usage of third-party data and hand-written rules to map the merchant locations to the aggregated merchant, which results in false positives during the aggregation process.
In view of the above discussion, there exists a technological need for generating an aggregated merchant name for different merchant locations based on their merchant names in an efficient way.

SUMMARY
Various embodiments of the present disclosure provide methods and systems for merchant aggregation using context-aware neural machine translation models.
In an embodiment, a computer-implemented method for determining an aggregated merchant name is disclosed. The computer-implemented method performed by a server system includes receiving payment transaction data from an acquirer. The payment transaction data includes at least a merchant name data field associated with a merchant. The computer-implemented method includes converting a plurality of characters of the merchant name data field into a plurality of embedding vectors. Each character of the plurality of characters is associated with an embedding vector of the plurality of embedding vectors. The computer-implemented method includes generating a set of source hidden vectors corresponding to the plurality of characters. A source hidden vector associated with a character is generated by applying a neural machine translation (NMT) model using an encoder over a corresponding embedding vector. The computer-implemented method includes predicting the aggregated merchant name using a decoder. A current character associated with the aggregated merchant name is predicted by iteratively performing: determining attention weights of the plurality of characters based, at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character, determining a context vector based, at least in part, on the attention weights associated with the plurality of characters, concatenating the context vector with a merchant contextual vector for obtaining a context-aware encoded vector, calculating an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector, determining the current character based, at least in part, on the attention mechanism hidden state vector, and providing the attention mechanism hidden state vector as a feedback into the decoder for predicting a next character associated with the aggregated merchant name.
In another embodiment, a server system is disclosed. The server system includes a communication interface, a memory comprising executable instructions, and a processor communicably coupled to the communication interface. The processor is configured to execute the executable instructions to cause the server system to at least receive payment transaction data from an acquirer. The payment transaction data includes at least a merchant name data field associated with a merchant. The server system is further caused to convert a plurality of characters of the merchant name data field into a plurality of embedding vectors. Each character of the plurality of characters is associated with an embedding vector of the plurality of embedding vectors. The server system is further caused to generate a set of source hidden vectors corresponding to the plurality of characters. A source hidden vector associated with a character is generated by applying a neural machine translation (NMT) model over a corresponding embedding vector. The server system is further caused to predict an aggregated merchant name using a decoder.
A current character associated with the aggregated merchant name is predicted by iteratively performing: determining attention weights of the plurality of characters based, at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character, determining a context vector based, at least in part, on the attention weights associated with the plurality of characters, concatenating the context vector with a merchant contextual vector for obtaining a context-aware encoded vector, calculating an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector, determining the current character based, at least in part, on the attention mechanism hidden state vector, and providing the attention mechanism hidden state vector as a feedback into the decoder for predicting a next character associated with the aggregated merchant name.
In yet another embodiment, a system for determining an aggregated merchant name is disclosed. The system includes an encoder, an attention layer, and a decoder. The encoder is configured to receive a plurality of embedding vectors for a plurality of characters associated with a merchant name data field in payment transaction data and generate a set of source hidden vectors corresponding to the plurality of characters by applying a neural machine translation (NMT) model over each embedding vector of the plurality of embedding vectors. The payment transaction data is received from an acquirer. The decoder is configured to predict the aggregated merchant name. At each iteration of predicting a current character of the aggregated merchant name, the attention layer is configured to determine attention weights associated with the plurality of characters based, at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character received from the decoder, determine a context vector based, at least in part, on the attention weights associated with the plurality of characters, and concatenate the context vector with a merchant contextual vector for obtaining a context-aware encoded vector. The decoder is configured to calculate an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector, determine the current character based, at least in part, on the attention mechanism hidden state vector, and provide the attention mechanism hidden state vector as feedback for predicting a next character associated with the aggregated merchant name.
Other aspects and example embodiments are provided in the drawings and the detailed description that follows.

BRIEF DESCRIPTION OF THE FIGURES
For a more complete understanding of example embodiments of the present technology, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
FIG. 1 illustrates an example representation of an environment, in which at least some example embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a simplified block diagram of a server system, in accordance with an example embodiment;
FIGS. 3A and 3B, collectively, represent a schematic block diagram representation of a merchant aggregation system for determining an aggregated merchant name from the merchant name data field present in the payment transaction data, in accordance with an example embodiment;
FIG. 4 illustrates a simplified block diagram of a Neural Machine Translation (NMT) architecture with attention mechanism, in accordance with an example embodiment;
FIG. 5A is an example representation of a table depicting merchant name and location information of the plurality of merchants extracted from the payment transaction data, in accordance with an example embodiment of the present disclosure;
FIG. 5B is an example representation of a merchant aggregation table depicting grouping of all merchant locations with their respective aggregated merchant name, in accordance with an example embodiment of the present disclosure;
FIGS. 6A and 6B, collectively, represent a flow chart of a process flow of determining an aggregated merchant name associated with payment transaction data, in accordance with an example embodiment of the present disclosure;
FIGS. 7A and 7B, collectively, represent a flow diagram of a method for generating an aggregated merchant name, in accordance with an example embodiment of the present disclosure;
FIG. 8 is a simplified block diagram of a payment server, in accordance with an example embodiment of the present disclosure; and
FIG. 9 is a simplified block diagram of an acquirer server, in accordance with an example embodiment of the present disclosure.
The drawings referred to in this description are not to be understood as being drawn to scale except if specifically noted, and such drawings are only exemplary in nature.

DETAILED DESCRIPTION
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure can be practiced without these specific details.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
Moreover, although the following description contains many specifics for the purposes of illustration, one skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present disclosure. Similarly, although many of the features of the present disclosure are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the present disclosure is set forth without any loss of generality to, and without imposing limitations upon, the present disclosure.
The term "acquirer" refers to an organization that transmits a purchase transaction to a payment card system for routing to the issuer of the payment card account in question. Typically, the acquirer has an agreement with merchants, wherein the acquirer receives authorization requests for purchase transactions from the merchants and routes the authorization requests to the issuers of the payment cards being used for the purchase transactions. The terms "acquirer", "acquiring bank", or "acquirer bank" will be used interchangeably herein. Further, one or more server systems associated with the acquirer are referred to as an "acquirer server" to carry out its functions.
The term "payment network", used herein, refers to a network or collection of systems used for the transfer of funds through the use of cash-substitutes. Payment networks may use a variety of different protocols and procedures in order to process the transfer of money for various types of transactions. Transactions that may be performed via a payment network may include product or service purchases, credit purchases, debit transactions, fund transfers, account withdrawals, etc. Payment networks may be configured to perform transactions via cash-substitutes, which may include payment cards, letters of credit, checks, financial accounts, etc. Examples of networks or systems configured to perform as payment networks include those operated by entities such as Mastercard®.
The term "merchant", used throughout the description, generally refers to a seller, a retailer, a purchase location, an organization, or any other entity that is in the business of selling goods or providing services, and it can refer to either a single business location or a chain of business locations of the same entity. Further, the term "aggregated merchant name", used throughout the description, refers to a standard merchant name of a merchant despite variations shown by different franchisee outlets or different merchants (merchants at different geographical locations). The information associated with such an aggregated merchant is ‘pre-defined’ and stored in a database available at a server system.

OVERVIEW
Various embodiments of the present disclosure provide methods, systems, electronic devices, and computer program products for performing merchant aggregation using context-aware neural machine translation (NMT) techniques. More specifically, embodiments of the present disclosure determine an aggregated merchant name associated with a merchant using payment transactions that are initiated at the merchant. Such techniques for generating the aggregated merchant name aid in classifying payment transactions performed at different merchant locations of the merchant under the aggregated merchant name.
In an example, the present disclosure describes a server system that determines the aggregated merchant name associated with payment transaction data. The server system includes at least a processor and a memory. In one non-limiting example, the server system is a payment server. The server system is configured to receive payment transaction data from an acquirer. The payment transaction data includes, but is not limited to, merchant name and merchant location data fields. The merchant name data field includes a merchant name of a merchant along with noise and junk characters/numbers, which may be a deterrent to identifying/classifying the merchant based on the merchant name. The noise and junk characters/numbers may be due to data transmission errors or due to initialization and calibration errors of Point-Of-Sale (POS) terminals at merchant locations. In an embodiment, the server system is configured to perform data pre-processing over the merchant name for filtering one or more noise characters and to segment the merchant name into a plurality of characters.
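The pre-processing step above can be sketched as follows. This is a minimal illustration only: the exact noise-character set and the `preprocess_merchant_name` helper name are assumptions for the example, not part of the disclosure.

```python
import re

def preprocess_merchant_name(raw_name: str) -> list:
    """Filter noise characters from a raw merchant name data field and
    segment the cleaned name into individual characters."""
    # Keep letters, digits, spaces, and a few common name symbols; which
    # characters count as "noise" is an assumption for illustration.
    cleaned = re.sub(r"[^A-Za-z0-9 &'.-]", "", raw_name)
    # Collapse runs of whitespace left behind by removed characters.
    cleaned = re.sub(r"\s+", " ", cleaned).strip().upper()
    # Segment into the plurality of characters fed to the embedding layer.
    return list(cleaned)

chars = preprocess_merchant_name("McDonald's #1123 **POS** Jodhpur")
```

The character list produced here is what the embedding stage consumes, one embedding vector per character.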
The server system is configured to convert a plurality of characters of the merchant name data field into a plurality of embedding vectors using a character-level embedding algorithm. Each character of the plurality of characters is associated with an embedding vector.
In one embodiment, the server system is configured to employ a recurrent neural network (RNN) based encoder-decoder architecture with an attention mechanism. The server system is configured to generate a set of source hidden vectors corresponding to the plurality of characters. A source hidden vector associated with a character is generated by applying a neural machine translation (NMT) model using an encoder over a corresponding embedding vector. The server system is configured to predict the aggregated merchant name using a decoder. Each character associated with the aggregated merchant name is predicted by iteratively performing particular steps.
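The embedding and encoder stages above can be sketched with toy dimensions. The random embedding table, the weight matrices, and the plain `tanh` recurrence are illustrative assumptions standing in for the trained NMT encoder (which in practice would use learned parameters and typically an LSTM or GRU cell).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real sizes are model hyperparameters (assumed here).
vocab = {c: i for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ '&.-")}
emb_dim, hid_dim = 8, 16

E = rng.normal(size=(len(vocab), emb_dim))        # character embedding table
W_xh = rng.normal(size=(emb_dim, hid_dim)) * 0.1  # input-to-hidden weights
W_hh = rng.normal(size=(hid_dim, hid_dim)) * 0.1  # hidden-to-hidden weights

def encode(chars):
    """Map each character to its embedding vector, then run a simple
    recurrent encoder to produce one source hidden vector per character."""
    h = np.zeros(hid_dim)
    source_hidden = []
    for c in chars:
        x = E[vocab[c]]                   # embedding vector for this character
        h = np.tanh(x @ W_xh + h @ W_hh)  # recurrent state update
        source_hidden.append(h)
    return np.stack(source_hidden)        # shape: (num_chars, hid_dim)

H = encode(list("STARBUCKS"))
```

Each row of `H` is the source hidden vector for one input character, the set the attention layer scores against at every decoding step.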
For predicting a current character, the server system is configured to determine attention weights of the plurality of characters based, at least in part, on the set of source hidden vectors, a current target hidden vector associated with the current character and a preceding character of the current character. Thereafter, the server system is configured to determine a context vector based, at least in part, on the attention weights associated with the plurality of characters. The context vector is concatenated with a merchant contextual vector for obtaining a context-aware encoded vector. In one embodiment, the server system is configured to generate the merchant contextual vector based, at least in part, on merchant-specific information included in the payment transaction data. The merchant contextual vector is utilized for differentiating similar merchant names. In one non-limiting example, the merchant-specific information includes, but is not limited to, an industry code, a merchant category code (MCC), an average transaction size of the merchant (i.e., sales volume of the merchant), etc.
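The attention computation just described can be sketched as below, assuming a dot-product score function (the disclosure does not fix a particular scoring scheme) and a hypothetical `context_aware_encoding` helper name.

```python
import numpy as np

def context_aware_encoding(source_hidden, target_hidden, merchant_ctx):
    """Compute attention weights over the source hidden vectors, form the
    context vector, and concatenate it with the merchant contextual
    vector to obtain the context-aware encoded vector."""
    # Score each source hidden vector against the current target hidden
    # vector (dot-product scoring is one common choice).
    scores = source_hidden @ target_hidden
    # Softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the source hidden vectors.
    context = weights @ source_hidden
    # Concatenate with the merchant contextual vector.
    return np.concatenate([context, merchant_ctx]), weights
```

The merchant contextual vector appended here is what lets two identically spelled names (e.g. the same string under different industry codes) decode differently.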
In one embodiment, the server system is configured to calculate an attention mechanism hidden state vector based on the context-aware encoded vector and the current target hidden vector and determine the current character based on the attention mechanism hidden state vector. In one embodiment, the server system is configured to generate a probability distribution of characters based at least on the attention mechanism hidden state vector associated with the current character. The probability distribution indicates a selection probability value of a character being selected as the current character for the aggregated merchant name. A character having a selection probability value greater than a predetermined threshold value is selected as the current character for the aggregated merchant name.
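A minimal sketch of this selection step follows. The `tanh` combination, the weight matrices `W_a` and `W_out`, and the function name are assumptions for illustration; only the overall flow (context-aware vector plus target hidden vector, then a softmax over characters, then a threshold check) comes from the description above.

```python
import numpy as np

def predict_character(context_aware_vec, target_hidden,
                      W_a, W_out, alphabet, threshold=0.05):
    """Derive the attention mechanism hidden state vector, turn it into a
    probability distribution over characters, and select a character whose
    selection probability exceeds the predetermined threshold."""
    # Attentional hidden state: tanh combination of the context-aware
    # encoded vector and the current target hidden vector.
    combined = np.concatenate([context_aware_vec, target_hidden])
    attn_hidden = np.tanh(combined @ W_a)
    # Softmax over the character alphabet gives selection probabilities.
    logits = attn_hidden @ W_out
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] > threshold:
        # attn_hidden is also fed back into the decoder for the next step.
        return alphabet[best], attn_hidden
    return None, attn_hidden
```

Returning `attn_hidden` alongside the character mirrors the feedback loop: the same vector is fed back into the decoder when predicting the next character.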
After predicting the aggregated merchant name, the server system is configured to classify all merchant locations associated with the aggregated merchant name into a group. More specifically, all payment transactions associated with a same aggregated merchant name are grouped together even if they are from different merchant locations. Thereafter, the server system is configured to generate a merchant aggregation table based on the grouping.
Without in any way limiting the scope, interpretation, or application of the claims appearing below, technical effects of one or more of the example embodiments disclosed herein are to determine an aggregated merchant name associated with at least one merchant and to aggregate payment transactions received from the at least one merchant to corresponding aggregated merchant name data stores. Further, the present disclosure allows servers to automatically aggregate and store payment transactions of merchants in memory locations corresponding to their aggregated merchant name in a merchant database, thereby eliminating inaccurate entries in a memory and improving data accuracy and payment processing speed.
Additionally, the present disclosure utilizes an NMT modeling technique for determining an aggregated merchant name of the merchant. The system employs character-level encoding instead of word-level embedding, and therefore every merchant name can be represented in a vector form even if its words are out-of-vocabulary words. Moreover, as most merchant names are unconventional and include fanciful words such as emoticons, common words spelt in an uncommon way with special characters, and coined words based on the business, the character-level encoding of the merchant name helps in easily aggregating payment transaction data from merchants at different merchant locations. Further, utilization of the attention mechanism with the NMT modeling technique helps in better learning of the sequence of characters in the merchant name and thereby results in a more accurate prediction of aggregated merchant names.
Additionally, the context-aware encoded vector based on attention mechanism and merchant-specific information provided to NMT based decoder helps in differentiating a merchant of one category/industry from another merchant with a similar name in a different category/industry. Moreover, the context-aware encoded vector helps the NMT based decoder to differentiate between an aggregated merchant name of a low ticket size and a similar aggregated merchant name of a high ticket size operating within the same category/industry.
Various example embodiments of the present disclosure are described hereinafter with reference to FIGS. 1 to 9.
FIG. 1 illustrates an exemplary representation of an environment 100 related to at least some example embodiments of the present disclosure. Although the environment 100 is presented in one arrangement, other embodiments may include the parts of the environment 100 (or other parts) arranged otherwise depending on, for example, aggregating merchants, etc. The environment 100 generally includes a plurality of entities, for example, an acquirer server 102, a payment network 104 including a payment server 106, each coupled to, and in communication with (and/or with access to) a network 110. The network 110 may include, without limitation, a light fidelity (Li-Fi) network, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a satellite network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, a virtual network, and/or another suitable public and/or private network capable of supporting communication among the entities illustrated in FIG. 1, or any combination thereof.
Various entities in the environment 100 may connect to the network 110 in accordance with various wired and wireless communication protocols, such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), 2nd Generation (2G), 3rd Generation (3G), 4th Generation (4G), 5th Generation (5G) communication protocols, Long Term Evolution (LTE) communication protocols, or any combination thereof. For example, the network 110 may include multiple different networks, such as a private network made accessible by the payment network 104 to the acquirer server 102 and the payment server 106, separately, and a public network (e.g., the Internet etc.).
The environment 100 also includes a server system 108 configured to perform one or more of the operations described herein. In one example, the server system 108 is a payment server 106. In general, the server system 108 is configured to perform merchant name cleansing from payment transaction data and aggregating payment transactions of all merchant locations to their aggregated merchant name. The server system 108 is a separate part of the environment 100, and may operate apart from (but still in communication with, for example, via the network 110) the acquirer server 102, the payment server 106, and any third party external servers (to access data to perform the various operations described herein). However, in other embodiments, the server system 108 may actually be incorporated, in whole or in part, into one or more parts of the environment 100, for example, the payment server 106. In addition, the server system 108 should be understood to be embodied in at least one computing device in communication with the network 110, which may be specifically configured, via executable instructions, to perform as described herein, and/or embodied in at least one non-transitory computer readable media.
In one embodiment, the acquirer server 102 is associated with a financial institution (e.g., a bank) that processes financial transactions. This can be an institution that facilitates the processing of payment transactions for physical stores, merchants, or an institution that owns platforms that make online purchases or purchases made via software applications possible (e.g., shopping cart platform providers and in-app payment processing providers). The terms “acquirer”, “acquiring bank”, or “acquirer server” will be used interchangeably herein.
In one embodiment, a plurality of merchants 112a, 112b, and 112c is associated with the acquirer server 102. The plurality of merchants 112a-112c may be physical stores, such as retail establishments, or merchant-facilitated e-commerce website interfaces (online stores). The plurality of merchants 112a, 112b, and 112c hereinafter is collectively represented as "the merchant 112".
To accept payment transactions from customers, the merchant 112 normally establishes an account with a financial institution (i.e., the acquirer server 102) that is part of the financial payment system. Account details of the merchant accounts established with the acquirer bank are stored in merchant profiles of the merchants in a memory of the acquirer server 102 or on a cloud server associated with the acquirer server 102. In one embodiment, the payment transaction may be associated with card-present or card-not-present transaction types. It shall be noted that all the merchants 112a-112c may not be associated with a single acquirer; the merchants may establish financial accounts with different acquirers, and payment transactions may therefore be facilitated by more than one acquirer server. This arrangement has not been explained herein for the sake of brevity.
In one embodiment, the merchant 112 has a payment transaction terminal (not shown in figures) that communicates directly or indirectly with the acquirer server 102. Examples of the payment transaction terminal may include, but are not limited to, a Point-of-Sale (POS) terminal and a customer device with a payment gateway application. The POS terminal is usually located at stores or facilities of the merchant 112. The merchant 112 can have more than one payment transaction terminal. In one embodiment, a customer may perform a payment transaction using the customer device (e.g., a mobile phone), which corresponds to an e-commerce payment transaction.
In one example, a customer purchases a good or service from the merchant 112 using a payment card. The customer may utilize the payment card to effectuate payment by presenting/swiping the payment card to the POS terminal. Upon presentation of the physical or virtual payment card, account details (i.e., account number) are accessed by the POS terminal. The POS terminal sends payment transaction details to the acquirer server 102. The acquirer server 102 sends a payment transaction request to the server system 108 or the payment server 106 for routing the payment transaction to a card issuer associated with the customer. The payment transaction request includes a plurality of data elements. The plurality of data elements may include, but is not limited to, BIN of the card issuer of the payment card, a payment transaction identifier, a payment transaction amount, a payment transaction date/time, a payment transaction terminal identifier, a merchant name and location, an acquirer identifier etc. In one embodiment, the payment transaction request may be an electronic message that is sent via the server system 108 or the payment server 106 to the card issuer of the payment card to request authorization for a payment transaction. The payment transaction request may comply with a message type defined by an International Organization for Standardization (ISO) 8583 standard, which is a standard for systems that exchange electronic transaction information associated with payments made by users using the payment card, or the payment account.
In one example, an ISO 8583 transaction message may include one or more data elements that store data usable by the server system 108 to communicate information such as transaction requests, responses to transaction requests, inquiries, indications of fraud, security information, or the like. For example, the ISO 8583 message may include a PAN in the second data field (also known as DE2), an amount of a transaction in DE4, a date of settlement in DE15, a location of merchant 112 in DE41, DE42, and/or DE43, or the like. In particular, the acquirer server 102 transmits merchant name, location, city, and country code in the DE 43 data element.
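As a rough illustration only, the data elements above can be pictured as a simple mapping; a real ISO 8583 message is a packed bitmap-and-field format, not a Python dictionary, and the field values below are invented placeholders (the DE43 value reuses the example merchant string from this disclosure):

```python
# Illustrative sketch of the ISO 8583 data elements named in the text:
# DE2 = PAN, DE4 = transaction amount, DE15 = settlement date,
# DE43 = merchant name, location, city, and country code.
transaction_message = {
    "DE2": "5555000011112222",                        # primary account number (PAN)
    "DE4": "000000012099",                            # transaction amount
    "DE15": "0929",                                   # date of settlement
    "DE43": "TIME*J2N418*Simag\\NEW YORK\\NY\\840",   # merchant name & location
}

def merchant_name_field(message):
    """Extract the raw merchant name data element (DE43) for later cleansing."""
    return message.get("DE43", "")

raw_name = merchant_name_field(transaction_message)
```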
The card issuer approves or denies the authorization request and then routes, via the payment network 104, an authorization response back to the acquirer server 102. The acquirer server 102 sends the approval to the POS terminal of the merchant 112. Seconds later, the customer completes the purchase and receives a receipt.
In one embodiment, the server system 108 stores merchant information in a database 114 for reporting and data analysis. In one embodiment, the database 114 is a central repository of data which is created by storing payment transaction data from transactions occurring within acquirers and issuers associated with the payment network 104. The database 114 stores real-time payment transaction data of a plurality of merchants. The payment transaction data may include, but is not limited to, transaction attributes such as merchant name, merchant identifier, merchant category code (MCC), transaction amount, source of funds (e.g., bank or credit cards), transaction channel used for loading funds (e.g., POS terminal), payment transaction location information, external data sources, and other internal data to evaluate each payment transaction. In one embodiment, the server system 108 stores, reviews, and/or analyzes information used in merchant aggregation.
In one embodiment, the server system 108 extracts merchant name and location from transaction data. Sometimes, the merchant name received from the transaction data is non-aggregated or in a raw text form. In one embodiment, the server system 108 is configured to aggregate different merchant locations of brands to their respective aggregated merchant name in a fully automated manner using machine learning models.
The server system 108 is configured to perform one or more of the operations described herein. In particular, the server system 108 is configured to generate an aggregated merchant name from a merchant name detected in the payment transaction data. For instance, the payment transaction data includes a merchant name data element (i.e., “DE43 field”) indicating a merchant name of the merchant 112. The merchant name data element may include noise and/or junk characters, for example, “*12#Zax2by” as the merchant name of the merchant 112. The server system 108 is configured to determine an aggregated merchant name corresponding to the merchant 112 based on the merchant name data element received in the payment transaction data. More specifically, the server system 108 determines the aggregated merchant name of the merchant 112 as “ZaxbY” from the merchant name data field “*12#Zax2by” present in the payment transaction data. In one embodiment, the server system 108 aggregates all merchant locations associated with the same aggregated merchant name (e.g., “ZaxbY”) together.
In one embodiment, the payment network 104 may be used by the payment card issuing authorities as a payment interchange network. The payment network 104 may include a plurality of payment servers such as the payment server 106. Examples of payment interchange networks include, but are not limited to, the Mastercard® payment system interchange network. The Mastercard® payment system interchange network is a proprietary communications standard promulgated by Mastercard International Incorporated® for the exchange of financial transactions among a plurality of financial institutions that are members of Mastercard International Incorporated®. (Mastercard is a registered trademark of Mastercard International Incorporated located in Purchase, N.Y.).
The number and arrangement of systems, devices, and/or networks shown in FIG. 1 are provided as an example. There may be additional systems, devices, and/or networks; fewer systems, devices, and/or networks; different systems, devices, and/or networks; and/or differently arranged systems, devices, and/or networks than those shown in FIG. 1. Furthermore, two or more systems or devices shown in FIG. 1 may be implemented within a single system or device, or a single system or device shown in FIG. 1 may be implemented as multiple, distributed systems or devices. Additionally, or alternatively, a set of systems (e.g., one or more systems) or a set of devices (e.g., one or more devices) of the environment 100 may perform one or more functions described as being performed by another set of systems or another set of devices of the environment 100.
Referring now to FIG. 2, a simplified block diagram of a server system 200 is shown, in accordance with an embodiment of the present disclosure. The server system 200 is similar to the server system 108. In some embodiments, the server system 200 is embodied as a cloud-based and/or SaaS-based (software as a service) architecture. In one embodiment, the server system 200 is a part of the payment network 104 or integrated within the payment server 106. In another embodiment, the server system 200 is the acquirer server 102.
The server system 200 includes a computer system 202 and a database 204. The computer system 202 includes at least one processor 206 for executing instructions, a memory 208, a communication interface 210, and a storage interface 214 that communicate with each other via a bus 212.
In some embodiments, the database 204 is integrated within the computer system 202. For example, the computer system 202 may include one or more hard disk drives as the database 204. A storage interface 214 is any component capable of providing the processor 206 with access to the database 204. The storage interface 214 may include, for example, an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any component providing the processor 206 with access to the database 204. In some example embodiments, the database 204 is configured to store one or more trained machine learning models corresponding to merchant names.
Examples of the processor 206 include, but are not limited to, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a field-programmable gate array (FPGA), and the like. The memory 208 includes suitable logic, circuitry, and/or interfaces to store a set of computer readable instructions for performing operations. Examples of the memory 208 include a random-access memory (RAM), a read-only memory (ROM), a removable storage drive, a hard disk drive (HDD), and the like. It will be apparent to a person skilled in the art that the scope of the disclosure is not limited to realizing the memory 208 in the server system 200, as described herein. In another embodiment, the memory 208 may be realized in the form of a database server or a cloud storage working in conjunction with the server system 200, without departing from the scope of the present disclosure.
The processor 206 is operatively coupled to the communication interface 210 such that the processor 206 is capable of communicating with a remote device 216 such as the acquirer server 102 or the payment server 106, or communicating with any entity connected to the network 110 (as shown in FIG. 1). In one embodiment, the processor 206 is configured to receive payment transaction data from the acquirer server 102. The payment transaction data includes at least a merchant name data field associated with a merchant.
It is noted that the server system 200 as illustrated and hereinafter described is merely illustrative of an apparatus that could benefit from embodiments of the present disclosure and, therefore, should not be taken to limit the scope of the present disclosure. It is noted that the server system 200 may include fewer or more components than those depicted in FIG. 2.
In one embodiment, the processor 206 includes a data pre-processing engine 218, an encoder 220, an attention layer 222, a decoder 224, and a classification engine 226. It should be noted that components described herein, such as the data pre-processing engine 218, the encoder 220, the attention layer 222, the decoder 224, and the classification engine 226, can be configured in a variety of ways, including electronic circuitries, digital arithmetic and logic blocks, and memory systems in combination with software, firmware, and embedded technologies.
The data pre-processing engine 218 includes suitable logic and/or interfaces for extracting a merchant name from the merchant name data field included in the payment transaction data. In one embodiment, the processor 206 is configured to match the merchant name with aggregated merchant names stored in the database 114. If there does not exist a match, it means that the merchant name of the merchant 112 is fused with one or more noise characters and/or numbers that may add to computational complexity of matching. In an example, the merchant name (TIME*J2N418*Simag) in the merchant name data field includes noise and/or junk characters “*J2N418*Simag”. The data pre-processing engine 218 is configured to perform data pre-processing over the merchant name, such as, for example, removing numbers, special characters, punctuations, etc. These noise and/or junk characters carry no significant information and are usually filtered out to generate a cleansed merchant name (or “the merchant name” without the noise and/or junk characters). More specifically, the data pre-processing engine 218 includes a plurality of merchant names in a list of pre-cleaned merchant names and performs a mapping of the merchant name data field to at least one merchant name in the list of pre-cleaned merchant names. For instance, the data pre-processing engine 218 filters junk characters “*J2N418*Simag” from the merchant name data field “TIME*J2N418*Simag” and maps the merchant name data field to at least one merchant name (e.g., TIME) based on the pre-cleaned list of merchant names. In other words, characters which are not in the alphabet may be assigned to an “unknown” character, or converted to the closest character, e.g., accented characters may be converted to the corresponding un-accented ones. However, it shall be noted that data pre-processing is optional and embodiments of the present disclosure can be practiced on the merchant name data field along with the noise and/or junk characters.
The data pre-processing engine 218 is configured to segment the merchant name into a plurality of characters. In one example, to segment the merchant name, each character of the merchant name is separated using a defined set of delimiters (e.g., spaces, commas, semicolons, etc.). For instance, the merchant name “TIME” is segmented into a plurality of characters (e.g., ‘T’, ‘I’, ‘M’, ‘E’).
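A minimal sketch of the optional cleansing and the character segmentation described above, assuming an invented regular-expression rule and a hypothetical pre-cleaned name list (the disclosure does not prescribe either):

```python
import re

# Hypothetical list of pre-cleaned merchant names; a real deployment would
# draw this from the merchant database described in the text.
PRE_CLEANED_NAMES = ["TIME", "ZBXAY"]

def cleanse(merchant_name_field):
    # Drop numbers, punctuation, and special characters, keeping letters,
    # then map the result to a known pre-cleaned name where possible.
    stripped = re.sub(r"[^A-Za-z ]+", " ", merchant_name_field)
    for token in stripped.split():
        if token.upper() in PRE_CLEANED_NAMES:
            return token.upper()
    return stripped.strip()

def segment(merchant_name):
    # Segment the cleansed name into its individual characters.
    return list(merchant_name)

name = cleanse("TIME*J2N418*Simag")   # example string from the disclosure
chars = segment("TIME")
```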
It should be noted that merchant names are mostly “fanciful words”, which implies that they are new words, words with special characters, out-of-vocabulary words, or common words spelt in an uncommon way. Examples of the characters in the merchant name data field may include, but are not limited to, alphabets, numbers, special characters, mathematical operators, symbols, emoticons, white spaces, images, signatures, and the like.
The data pre-processing engine 218 is configured to convert the plurality of characters of the merchant name data field into a plurality of embedding vectors. Each character is represented in a vector form. In other words, the data pre-processing engine 218 generates character-level embedding vectors corresponding to the plurality of characters (e.g., ‘T’, ‘I’, ‘M’, ‘E’) of the merchant name using character-to-vector embedding algorithms. The character-to-vector embedding algorithm (i.e., “character-level embedding algorithm”) generates embedding vectors for each character of the merchant name, which may be an optionally pre-processed text sequence (i.e., “merchant name”). In an example scenario, assuming the merchant name is limited to the 26 alphabets of the English language, the data pre-processing engine 218 generates a 52-dimensional embedding vector (covering both uppercase and lowercase alphabets) for each character included in the merchant name data field.
In one example, suppose the merchant name data field consists of a sequence of words {w1, w2}. A function is defined which takes a word w1 as input and returns a one-hot vector representation of each character in the word w1. This is a binary vector of length |C| (the size of the character alphabet), having a 1 at the index Cj of the character, with all other entries 0.
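The one-hot character representation described above can be sketched as follows, assuming a 26-letter uppercase English alphabet as the character set C:

```python
# Character set C: the 26 uppercase English letters, so |C| = 26.
ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def one_hot_characters(word):
    """Return, for each character of the word, a binary vector of length |C|
    with a 1 at that character's index and 0 everywhere else."""
    vectors = []
    for ch in word.upper():
        vec = [0] * len(ALPHABET)
        if ch in ALPHABET:
            vec[ALPHABET.index(ch)] = 1   # unknown characters stay all-zero
        vectors.append(vec)
    return vectors

vecs = one_hot_characters("TIME")   # four 26-dimensional binary vectors
```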
The encoder 220 includes suitable logic and/or interfaces for encoding the character-level embedding vectors associated with the plurality of characters for generating a set of source hidden vectors. In one embodiment, the encoder 220 implements a neural machine translation (NMT) model. In particular, the encoder 220 generates a source hidden vector associated with a character by applying the NMT model over a corresponding embedding vector. In other words, the encoder 220 is operable to read the embedding vectors corresponding to the plurality of characters, one character at a time, to produce a source hidden vector associated with each character.
In general, the NMT model includes an encoder 220 and a decoder 224. In one non-limiting example, the NMT model may be a context-aware neural network language translation model with attention mechanism. In one embodiment, the NMT model corresponds to a recurrent neural network (RNN) based encoder-decoder architecture. In one embodiment, the NMT model is a pre-trained language translation model. More specifically, the NMT model may be based on Long Short-Term Memory (LSTM) model (e.g., a six-layer deep Long Short-Term Memory model). In an illustrative manner, the encoder 220 includes a plurality of sequential encoder blocks that are types of Long Short Term Memory (LSTM) based sequential encoders (see, 402 in FIG. 4).
During the training process, the encoder 220 and the decoder 224 are trained using merchant names (which may be out-of-vocabulary (OOV) words), i.e., by automatically learning from a large amount of known merchant names or a predefined word dictionary through the neural network. This allows the encoder 220 and the decoder 224 to learn information in a recognized text and a correct merchant name corresponding to the known merchant name data, such that the final aggregated merchant name at the decoder 224 can be obtained correctly. The encoder 220 and the decoder 224 are trained using conventional neural network training techniques (e.g., the back-propagation training technique). In one embodiment, the encoder 220 and the decoder 224 utilize a cross-entropy loss function for reducing the output deviation from known data by adjusting various weights associated with the encoder 220 and the decoder 224.
It shall be noted that the encoder 220 includes any number of encoder blocks based on length of the merchant name data field/merchant name. More specifically, the number of encoder blocks in the encoder 220 depends on a number of characters in the merchant name. The terms ‘LSTM encoders’ and ‘LSTM encoder blocks’ have been used interchangeably throughout the description and correspond to an encoder unit in the encoder 220.
In one non-limiting example, suppose the merchant name data field contains a text (e.g., “ZBX!AY*2R”), and the encoder 220 is fed with an embedding vector corresponding to character ‘Z’ at a first encoding timestamp, an embedding vector corresponding to character ‘B’ at a second encoding timestamp, and so on. More specifically, the encoder 220 learns a representation of the merchant name during a particular time interval and generates a source hidden vector "Hk" associated with a character (e.g., ‘Z’). The source hidden vector is passed to a next encoder block during a subsequent encoding timestamp (e.g., from LSTM encoder 402a to LSTM encoder 402b in FIG. 4) to initialize the next/subsequent LSTM encoder’s state. The source hidden vectors associated with the encoder blocks of the encoder 220 at every encoding timestamp are collectively referred to as a set of source hidden vectors. For instance, if the merchant name (“ZBX!AY”) has 6 characters, the encoder 220 has 6 source hidden vectors (e.g., H1, H2, H3, H4, H5, H6, also referred to as the ‘set of source hidden vectors’) corresponding to the characters in the merchant name that are derived from the encoder blocks in the encoder 220. Moreover, the encoder 220 generates an encoder output (i.e., a “dense vector”) at the last encoding timestamp. The encoder output is passed to the decoder 224.
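The encoder pass described above can be sketched, in a deliberately simplified form, as one recurrent update per character embedding. A plain tanh RNN cell stands in here for the LSTM encoder blocks, and all weights are random placeholders:

```python
import math
import random

HIDDEN = 4  # toy hidden-state size; the disclosure does not fix a dimension

def rnn_step(x, h, Wx, Wh):
    # h_k = tanh(Wx . x_k + Wh . h_{k-1}): one recurrent update per character.
    return [math.tanh(sum(wx * xi for wx, xi in zip(rowx, x)) +
                      sum(wh * hi for wh, hi in zip(rowh, h)))
            for rowx, rowh in zip(Wx, Wh)]

def encode(embeddings):
    random.seed(1)
    dim = len(embeddings[0])
    Wx = [[random.gauss(0, 0.5) for _ in range(dim)] for _ in range(HIDDEN)]
    Wh = [[random.gauss(0, 0.5) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
    h = [0.0] * HIDDEN
    source_hidden_vectors = []
    for x in embeddings:                  # one character embedding per timestamp
        h = rnn_step(x, h, Wx, Wh)
        source_hidden_vectors.append(h)   # Hk for character k
    return source_hidden_vectors, h       # final h plays the "dense vector" role

embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]  # 4 toy embeddings
H, dense = encode(embs)
```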
The decoder 224 includes suitable logic and/or interfaces for predicting a current character of the aggregated merchant name by iteratively performing particular steps. The decoder 224 continues to perform the particular steps iteratively until an end token is determined by the decoder 224. The decoder 224 is configured to generate a target hidden vector after receiving the encoder output (i.e., “dense vector”) from the encoder 220. In one embodiment, the decoder 224 generates a current target hidden vector utilizing the previous target hidden vector associated with a preceding character and previously predicted character.
In an embodiment, the decoder 224 includes a plurality of sequential decoder blocks that are types of LSTM-based sequential decoders (see, 404 in FIG. 4). The LSTM-based sequential decoders in the decoder 224 are a type of deep neural network that sequentially predicts characters of the aggregated merchant name using a Long Short-Term Memory (LSTM) machine learning model.
The attention layer 222 includes suitable logic and/or interfaces for determining a context-aware encoded vector based at least on the set of source hidden vectors and the target hidden vector utilizing an attention mechanism.
Specifically, the attention mechanism has a great promoting effect on sequence learning tasks: data-weighted conversion can be performed on the encoded data (i.e., the source hidden vectors) by introducing the attention mechanism to the encoder 220, and/or weighted conversion can be performed on the decoded data by introducing the attention mechanism to the decoder 224, thereby effectively improving the performance of sequence-to-sequence systems. Therefore, the accuracy of merchant name recognition can be further improved by introducing the attention mechanism into this embodiment.
The attention layer 222 generates an attention weight associated with each character based on the source hidden vector associated with each character and a current target hidden vector. Thus, at each time step t, the attention layer 222 generates a variable-length attention vector (at) (including attention weights of all the source hidden vectors) based on the current target hidden vector and the set of source hidden vectors.
The attention weights are determined at each decoding time stamp based on the set of source hidden vectors (e.g., H1, H2, H3, H4, H5, H6) and the current target hidden vector associated with the current character. During a first timestamp, a target hidden vector T1 is initialized using the dense vector received from the encoder 220. Subsequently, at a second timestamp, a target hidden vector T2 is generated using the target hidden vector T1 and a character predicted at the first timestamp.
In an example embodiment, an attention weight for a character at time step t is calculated based on a preceding character predicted at time step t-1, a target hidden vector associated with the current character and source hidden vectors of the characters at the time step t. In one embodiment, the attention weights are learned using a single layer neural network that learns a function to associate the source hidden vector of a character and the target hidden vector of the current character at the decoding timestamp. Thereafter, the attention weights are passed through a softmax layer to normalize the attention weights.
Then, the attention layer 222 generates the variable-length attention vector (at) based on the normalized attention weights.
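The attention-weight computation described above can be sketched as follows; a simple dot-product score stands in for the learned single-layer network, which is an assumption for illustration:

```python
import math

def attention_weights(source_hidden_vectors, target_hidden_vector):
    # Score each source hidden vector against the current target hidden
    # vector (dot product stands in for the learned scoring network).
    scores = [sum(s * t for s, t in zip(H_k, target_hidden_vector))
              for H_k in source_hidden_vectors]
    # Pass the scores through a softmax layer to normalize the weights.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]      # the attention vector a_t

H = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.4]]  # toy source hidden vectors
t = [0.5, 0.5]                            # toy current target hidden vector
a_t = attention_weights(H, t)
```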
Thereafter, the attention layer 222 generates a context vector (Ct) associated with the current character based on the variable-length attention vector (at) and the set of source hidden vectors associated with the plurality of characters. In one example, the context vector (Ct) is calculated based on the global attention model, where attention is placed on all source positions and takes into consideration all source hidden vectors to derive the context vector (Ct).
In another example, the context vector (Ct) for the current character is computed by multiplying source hidden vectors of preceding characters of the current character and the attention weights of the preceding characters.
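The context vector computation can be sketched as a weighted sum of the source hidden vectors, with weights given by the attention vector at (a global-attention sketch, per the first example above):

```python
def context_vector(attention_vector, source_hidden_vectors):
    # C_t = sum_k a_k * H_k: attention-weighted sum over all source positions.
    dim = len(source_hidden_vectors[0])
    C_t = [0.0] * dim
    for a_k, H_k in zip(attention_vector, source_hidden_vectors):
        for i in range(dim):
            C_t[i] += a_k * H_k[i]
    return C_t

a_t = [0.5, 0.3, 0.2]                     # toy attention weights (sum to 1)
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy source hidden vectors
C_t = context_vector(a_t, H)              # weighted sum, approx. [0.7, 0.5]
```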
To make this context vector (Ct) merchant-specific, the processor 206 is configured to generate a merchant contextual vector based on merchant-specific information such as, but not limited to, a merchant category code (MCC), an industry code, an average transaction size of the merchant, a merchant tax identifier, a merchant Uniform Resource Locator (URL), etc. The merchant contextual vector is utilized for distinguishing a merchant from another merchant associated with a similar aggregated merchant name. In other words, the merchant contextual vector is utilized for differentiating similar merchant names.
For example, the merchant name ‘SUBWAY’ may refer to a restaurant and the merchant name ‘Subway’ may refer to a ticket vending machine for a local bus ride. It is necessary for the decoder 224 to understand this difference while predicting the aggregated merchant name. Moreover, the prediction of the aggregated merchant name aids in drawing a line of demarcation for classifying the payment transaction data. In general, the merchant contextual vector helps the server system 200, more specifically, the classification engine 226 in distinguishing a similar merchant and classifying merchant locations based on the aggregated merchant name.
For generating the merchant contextual vector, the processor 206 is configured to generate an n-dimensional vector representation for each item of merchant-specific information. For example, suppose a merchant “XYZ” runs a grocery store; then the vector position corresponding to the MCC indicating grocery is set to ‘1’ and the remaining vector components are set to ‘0’. Similarly, the processor 206 generates a vector representation of each item of merchant-specific information. These vector representations are concatenated to generate a multi-dimensional merchant contextual vector. An example of vector representations of the merchant-specific information is shown below:
MCC = [0 0 1 0 0 0 0 0 0 0]T
Tax ID = [0 0 0 0 1 0 0 0 0 0]T
Industry Code = [0 0 0 0 0 0 1 0 0 0]T
Avg Ticket Size = [0 0 0 0 0 0 0 0 1 0]T
URL = [1 0 0 0 0 0 0 0 0 0]T

Merchant contextual vector (Mn) = [MCC; Tax ID; Industry Code; Avg Ticket Size; URL]
= [0 0 1 0 0 0 0 0 0 0 | 0 0 0 0 1 0 0 0 0 0 | 0 0 0 0 0 0 1 0 0 0 | 0 0 0 0 0 0 0 0 1 0 | 1 0 0 0 0 0 0 0 0 0]T
It must be apparent that the vector representations of the merchant-specific information and the merchant contextual vector are for example purposes only, and the merchant-specific information or the merchant contextual vector may be represented as a higher-dimensional or lower-dimensional vector with fewer or more rows and/or columns.
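The construction of the merchant contextual vector can be sketched as follows. The field names, bucket indices, and 10-dimensional per-field size mirror the example above but are otherwise arbitrary assumptions:

```python
import numpy as np

def one_hot(index, size):
    """Vector of zeros with a 1 at the given position."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

# Hypothetical bucket indices for a grocery merchant "XYZ"; each field
# is encoded as a 10-dimensional one-hot vector and then all fields
# are concatenated into one merchant contextual vector.
fields = {
    "mcc": 2,             # MCC bucket indicating grocery
    "tax_id": 4,
    "industry_code": 6,
    "avg_ticket_size": 8,
    "url": 0,
}
merchant_contextual = np.concatenate(
    [one_hot(i, 10) for i in fields.values()]
)
```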
Thereafter, the processor 206 is configured to concatenate the context vector and the merchant contextual vector for generating a context-aware encoded vector. Then, an attention mechanism hidden state vector {tilde over (ht)} is calculated based on the context-aware encoded vector and the current target hidden vector. The processor 206 is configured to predict the current character based on the attention mechanism hidden state vector {tilde over (ht)}. In addition, for further improving the accuracy of predicting the merchant name, the attention mechanism hidden state vector can be input as a feedback into the decoder 224.
Specifically, the attention mechanism hidden state vector {tilde over (ht)} at the time t is used as the input for the decoder 224 to calculate an attention mechanism hidden state vector for the next time. In other words, the decoder 224 provides the attention mechanism hidden state vector associated with the current character determined as feedback for predicting the next character associated with the aggregated merchant name.
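The combination step can be sketched as below, assuming a Luong-style tanh projection of the concatenated vectors; the projection matrix Wc and all dimensions are hypothetical:

```python
import numpy as np

def attention_hidden_state(context_aware, target_hidden, Wc):
    """Combine the context-aware encoded vector with the current target
    hidden vector: h_tilde = tanh(Wc [c; h]).
    Wc is a hypothetical learned projection matrix."""
    combined = np.concatenate([context_aware, target_hidden])
    return np.tanh(Wc @ combined)

# Toy dimensions: 6-dim context-aware vector, 4-dim target hidden vector,
# projected down to a 4-dim attention mechanism hidden state
rng = np.random.default_rng(0)
h_tilde = attention_hidden_state(rng.standard_normal(6),
                                 rng.standard_normal(4),
                                 rng.standard_normal((4, 10)))
```

The tanh keeps every component of the hidden state bounded, which makes it safe to feed back into the next decoder step.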
In one embodiment, the processor 206 is configured to feed the attention mechanism hidden state vector to a softmax layer to generate a probability distribution of characters. The probability distribution indicates a selection probability value of a character being selected as the current character for the aggregated merchant name. The processor 206 is configured to determine a character with a selection probability value greater than a predetermined threshold value and select the character as the current character of the aggregated merchant name.
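A minimal sketch of this softmax selection step, assuming an A-Z vocabulary and a hypothetical projection matrix W (neither is specified in the disclosure):

```python
import numpy as np

VOCAB = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # assumed character vocabulary

def select_character(attn_hidden, W, threshold=0.5):
    """Project the attention mechanism hidden state to vocabulary logits,
    softmax them into a probability distribution, and return the character
    whose selection probability exceeds the threshold (or None)."""
    logits = W @ attn_hidden
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    best = int(np.argmax(probs))
    return VOCAB[best] if probs[best] > threshold else None

# Toy weights that strongly favour 'Z' for this hidden state
W = np.zeros((26, 4))
W[25, 0] = 10.0
ch = select_character(np.array([1.0, 0.0, 0.0, 0.0]), W)
```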
The classification engine 226 includes suitable logic and/or interfaces for classifying all merchant locations based on the aggregated merchant name. In particular, the classification engine 226 is configured to classify/aggregate merchant locations associated with the same aggregated merchant name into a group. In general, “Classification” refers to a predictive modeling problem where a class label is predicted for a given example of the input sequence.
The classification engine 226 reads the aggregated merchant name determined by the decoder 224 for classifying each payment transaction data associated with different merchant locations of the merchant into a group. In one non-limiting example, three merchants are located at different merchant locations L1, L2, L3 but are associated with an aggregated merchant name “SUBWAY®”. The server system 200 extracts merchant name information from the payment transaction data and provides an aggregated merchant name corresponding to all three merchants. In one example embodiment, the classification engine 226 is configured to generate and store a merchant aggregation table into the database 114 based on the classification. An example of the merchant aggregation table is shown and explained with reference to FIG. 5B.
FIGS. 3A and 3B, collectively, represent a schematic block diagram representation 300 of a merchant aggregation system for determining an aggregated merchant name from merchant name data field present in the payment transaction data, in accordance with an embodiment of the present disclosure.
At first, the processor 206 is configured to extract the merchant name (see, 320), in raw text form, from the merchant name data field 302 associated with a payment transaction request received from the acquirer server 102. A data pre-processing unit 304 is configured to perform data pre-processing over the received merchant name and filter the merchant name to remove noise and/or junk characters from the merchant name (see, 306). However, this process may not eliminate all noise, and there may still be errors in the merchant name. During the segmentation (see, 308), the data pre-processing unit 304 is configured to parse/segment the merchant name (see, 322) into a plurality of characters (see, 324). In one example, to segment the merchant name, each character in the merchant name is separated into individual characters using a delimiter (e.g., spaces, semicolons, etc.). Each character is converted into an embedding vector (see, Table 326) using a character-level embedding algorithm (see, 310). In other words, the processor 206 is configured to perform the embedding process for each character to generate a vector representation of the corresponding character.
As shown in FIG. 3B, the embedding vectors are fed into a neural machine translation (NMT) model 312. The NMT model 312 includes LSTM encoder blocks 314, and LSTM decoder blocks 316. The LSTM encoder blocks 314 are configured to generate source hidden vectors (see, 328) associated with the characters of the merchant name. The last LSTM encoder block provides a dense vector to the LSTM decoder blocks 316 to initialize a first LSTM decoder block for generating a target hidden vector associated with a first character which is to be predicted. An attention layer (not shown in figures) in the NMT model 312 is configured to determine attention weights using a preceding character of the current character, the source hidden vectors and the target hidden vector of the current character. Thereafter, the attention layer creates a context vector using the source hidden vectors and corresponding attention weights and concatenates a merchant contextual vector including merchant-specific information with the context vector for generating a context-aware encoded vector.
At the decoding side, the processor 206 is configured to combine the target hidden vector and the context-aware encoded vector for generating an attention mechanism hidden state vector, which is provided as feedback into a next LSTM decoder block to improve decoding performance. The processor 206 is configured to feed the attention mechanism hidden state vector to a softmax layer to provide a probability distribution of characters. A character with a probability value greater than a predetermined threshold value is selected as the current character of the aggregated merchant name. In a similar manner, the LSTM decoder blocks 316 predict at least one character at each time step for generating the aggregated merchant name (see, 330). The decoding process is continued until an end token is identified by an LSTM decoder block.
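The character-by-character decoding loop with feedback can be sketched as below. Here `step_fn`, `attend_fn`, and `project_fn` are hypothetical stand-ins for the trained LSTM decoder block, the attention layer, and the softmax projection; only the control flow (feedback and stopping at the end token) reflects the description above:

```python
def decode(init_state, step_fn, attend_fn, project_fn,
           end_token="<eos>", max_len=50):
    """Greedy character-by-character decoding: at each time step the
    attention mechanism hidden state is fed back into the next decoder
    step, and decoding stops when the end token is produced."""
    chars, state, feedback = [], init_state, None
    for _ in range(max_len):
        state = step_fn(state, feedback)   # next target hidden vector
        attn_hidden = attend_fn(state)     # attention mechanism hidden state
        ch = project_fn(attn_hidden)       # most probable character
        if ch == end_token:
            break
        chars.append(ch)
        feedback = attn_hidden             # feedback for the next character
    return "".join(chars)

# Dummy components that emit "ZAXBY" and then the end token
outputs = iter(list("ZAXBY") + ["<eos>"])
name = decode(0,
              step_fn=lambda s, f: s + 1,
              attend_fn=lambda s: s,
              project_fn=lambda h: next(outputs))
```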
Thereafter, the processor 206 is configured to apply a classification (see 318) for generating a merchant aggregation table (see, 332). More specifically, all merchant locations associated with the same aggregated merchant name are grouped in the merchant aggregation table, which is stored at the database for data analytics.
Referring now to FIG. 4, a simplified block diagram of a Neural Machine Translation (NMT) architecture 400 with attention mechanism is illustrated, in accordance with one embodiment of the present disclosure. The NMT architecture 400 is an example of RNN based encoder-decoder architecture.
As shown in FIG. 4, the NMT architecture 400 includes an LSTM encoder 402, an LSTM decoder 404, and the attention layer 406. The encoder 220 (shown in FIG. 2) is an example of the LSTM encoder 402. In general, the LSTM encoder 402 is a type of neural network that models the sequence of characters using a Long Short-Term Memory (LSTM). Such modeling ability of character sequence allows for the automatic learning of different merchant names in the payment transaction data. Although only three LSTM encoder blocks (402a, 402b, 402c) are shown in FIG. 4, it will be appreciated that any number of LSTM blocks may be used depending on a sequence length of the merchant name/merchant name data field provided as an input sequence.
More particularly, a series of the LSTM encoders 402 is fed sequentially with one-dimensional embedding vectors representing a plurality of characters (e.g., ‘XBZ’) included in the merchant name data field. The vector representation of each character of the plurality of characters is provided by embedding a corresponding character (shown by embedding blocks 401a, 401b, 401c).
In one embodiment, each LSTM encoder block (e.g., 402a, 402b and 402c) learns a representation of a character and maintains a source hidden vector [H]. More specifically, at an encoding timestamp t =1, the LSTM encoder 402a takes an embedding vector associated with the character ‘X’ as an input and generates a source hidden vector H1, at an encoding timestamp t = 2, the LSTM encoder 402b takes the embedding vector associated with the character ‘B’ as an input and generates a source hidden vector H2, and so on.
The LSTM decoder 404 is a type of neural network that predicts a sequence of characters of the aggregated merchant name using a Long Short-Term Memory (LSTM). Such prediction ability of character sequence allows the LSTM decoder 404 to predict an aggregated merchant name from the merchant name in the payment transaction data. Although only three LSTM decoders (404a, 404b, 404c) are shown in FIG. 4, it will be appreciated that any number of LSTM blocks may be used depending on a sequence length of the aggregated merchant name.
At first, the LSTM decoder block 404a receives a dense vector, which includes all source-side information, from the last LSTM encoder block 402c; this dense vector initializes the LSTM decoder block 404a, and a start token is taken as an input. The LSTM decoder block 404a predicts a first character based on the dense vector received from the last LSTM encoder block 402c. Based on the predicted character (i.e., “Z”), the LSTM decoder block 404b generates a target hidden vector H2d. To learn relevant encoder-side information, the NMT architecture includes the attention layer 406. In general, the attention layer 406 improves the performance of the NMT model by selectively focusing on sub-parts of the encoded merchant name while determining the aggregated merchant name. The attention layer 406 generates attention weights corresponding to the set of source hidden vectors based on a preceding character of the current character (or the character predicted at timestamp t-1), the source hidden vectors H1, H2, H3, and the target hidden vector H2d. Based on the attention weights, the attention layer 406 generates an attention vector (at).
Thereafter, the attention layer 406 calculates a context vector (Z2e) based at least on the attention vector (at) and the set of source hidden vectors. In one embodiment, the attention layer 406 follows the global attention mechanism, where attention is applied to every source hidden vector. In another embodiment, the attention layer 406 follows the local attention mechanism, where attention is applied to only a few source hidden vectors. In one embodiment, the attention layer 406 is configured to perform multiplication of the attention vector with the source hidden vectors of the previously predicted characters for creating the context vector (Z2e).
Further, the attention layer 406 is configured to concatenate the context vector (Z2e) and a merchant contextual vector 410 for generating a context-aware encoded vector (i.e., [Z2e, MCC, industry code, transaction size]). The merchant contextual vector 410 includes, but is not limited to, merchant-specific information and is used to distinguish a merchant from a similar merchant. For example, the merchant name “DOMINOS” may refer to a pizza restaurant or a play area for kids. The merchant-specific information for the pizza restaurant is different from the merchant-specific information for the play area, thereby enabling the LSTM decoders 404 to demarcate and distinguish a similar merchant based on merchant contextual information. An example of the merchant contextual vector is shown and explained with reference to FIG. 2.
The LSTM decoder 404 combines the context-aware encoded vector and the target hidden vector H2d for generating an attention mechanism hidden state vector {tilde over (H2d)}. The LSTM decoder 404 is configured to predict the current character based on the attention mechanism hidden state vector {tilde over (H2d)}. In addition, for further improving the accuracy of predicting the merchant name, the attention mechanism hidden state vector can be input as feedback into the next LSTM decoder 404.
At the next timestamp, the LSTM decoder block 404c will use the attention mechanism hidden state vector {tilde over (H2d)} of the predicted character (e.g., “A”) for generating the next target hidden vector H3d. The attention layer 406 again determines attention weights based on the preceding character (e.g., “A”), the set of source hidden vectors and the target hidden vector H3d. Thereafter, similar operations are repeated until an end token (i.e., “end of symbol”) is received at the LSTM decoder 404.
In one non-limiting example, in order to configure the LSTM encoders 402 and the LSTM decoders 404, machine learning training techniques (e.g., using Stochastic Gradient Descent, back propagation, etc.) are used. Thus, accuracy of the NMT based encoder-decoder architecture 400 can be determined by comparing a predicted aggregated merchant name with an actual aggregated merchant name based on the merchant name (or input sentence) to compute a loss. While there are several varieties of loss functions, a very common one to utilize is the Log loss or Cross-Entropy Loss. The equation of this loss function is provided below:
Cross-Entropy Loss:

-Σ_(c=1)^(|S|) Σ_(e=1)^(|V|) y_(c,e) log(ŷ_(c,e))

Where,
|S| = length of the merchant name
|V| = size of the character vocabulary
y_(c,e) = 1 when vocabulary entry e is the correct character at position c
y_(c,e) = 0 when vocabulary entry e is not the correct character at position c
ŷ_(c,e) = predicted probability of vocabulary entry e at character position c

Based on the loss function, the LSTM encoders 402 and the LSTM decoders 404 are configured to adjust weights of the LSTM blocks.
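The cross-entropy loss above has a direct numpy transcription. This sketch assumes `y_true` holds one-hot targets and `y_pred` holds the predicted probability distributions, one row per character position:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred):
    """Sum of -y_(c,e) * log(y_hat_(c,e)) over every character
    position c and every vocabulary entry e.

    y_true: (|S|, |V|) one-hot targets
    y_pred: (|S|, |V|) predicted probability distributions
    """
    eps = 1e-12                        # guard against log(0)
    return float(-np.sum(y_true * np.log(y_pred + eps)))

# One character position, two-entry vocabulary: the correct entry
# is predicted with probability 0.8, so the loss is -log(0.8)
loss = cross_entropy_loss(np.array([[0.0, 1.0]]),
                          np.array([[0.2, 0.8]]))
```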
Referring now to FIG. 5A, an example representation of a table 500 depicting merchant name and location information of a plurality of merchants extracted from one or more payment transaction data, is illustrated, in accordance with one example embodiment of the present disclosure. As mentioned previously, the server system 108 receives payment transaction data (i.e., a payment transaction request). The payment transaction data includes at least a data element which stores merchant location and name information. The server system 108 extracts the merchant locations from the payment transaction data. The table 500 includes a plurality of data field columns such as, but not limited to, merchant name 502, address 504, city field 506, and state field 508. The merchant name data field column 502 corresponds to the merchant name provided by the agent/merchant while configuring a merchant terminal (e.g., a POS terminal). Sometimes, the merchant name in the merchant name data field 502 includes noise and/or junk characters. These noise and/or junk characters may be alphabets, numbers, or special characters that distort the merchant name.
As an example, a first row depicts a merchant name “Red Lob*5623*st” located at “3360 Camp Creek Parkway, Atlanta, GA” in the payment transaction data. In the above example, “*5623*” is a noise/junk character string. Therefore, to find the correct aggregated merchant name for such merchant names, the server system (e.g., the server system 200) utilizes a neural machine translation model configured to predict an aggregated merchant name based on the merchant name “Red Lob*5623*st” in the merchant name data field column 502. The table 500 includes as many entries as the number of payment transaction data records received from the plurality of merchants associated with the acquirer.
FIG. 5B, in conjunction with FIG. 5A, shows an example representation of a merchant aggregation table 520 depicting grouping of all merchant locations with their respective aggregated merchant name, in accordance with one embodiment of the present disclosure.
In column 522, the merchant aggregation table 520 includes merchant aggregated names corresponding to a plurality of merchants. The server system 200 is configured to group payment transaction data associated with a single aggregated merchant name together.
As shown in the FIG. 5B, the merchant aggregation table 520 categorically organizes payment transaction data of one or more merchant locations associated with the same aggregated merchant name together. For example, the payment transaction data from merchant locations (Atlanta and Jefferson City) associated with an aggregated merchant name “Red Lobster” (shown under an aggregated merchant name field 522) are aggregated in the merchant aggregation table.
It shall be apparent that the table 520 may include additional aggregated merchant names and/or merchant names based on the payment transaction data. Moreover, it shall be noted that the table 520 shown in FIG. 5B is exemplary and only provided for the purposes of explanation. In practice, there may be multiple such tables, and each table may have more or fewer columns (depending on the payment transaction information) and rows (depending on the payment transaction data or aggregated merchant names) than those depicted in FIG. 5B.
FIGS. 6A and 6B, collectively, represent a flow chart 600 of a process flow of determining an aggregated merchant name associated with payment transaction data, in accordance with an example embodiment. The sequence of operations of the flow chart 600 need not necessarily be executed in the same order as they are presented. Further, one or more operations may be grouped together and performed in the form of a single step, or one operation may have several sub-steps that may be performed in a parallel or sequential manner.
At 602, the server system 200 receives payment transaction data from the acquirer 102. The payment transaction data includes, but is not limited to, a merchant name in a merchant name data field among other payment-related information. For instance, the merchant name data field includes “XBZ!AY*2R” as a merchant name that is received from a POS terminal present at the merchant facility.
At 604, the server system 200 processes the merchant name from the merchant name data field. The merchant name in the merchant name data field may include noise/junk characters that are not related to the merchant name. In the above example, the characters “*2R” in the merchant name data field are the noise/junk characters. These junk characters increase the computational complexity, and hence it is desirable to pre-process the merchant name and filter out the noise/junk characters. However, it shall be noted that data pre-processing of the merchant name is optional, and the server system 200 can operate directly on a merchant name containing the noise/junk characters.
At 606, the server system 200 segments the merchant name into a plurality of characters. The plurality of characters may include alphabets, numbers, special characters, mathematical operators, emoticons, images, icons, or any combination thereof. For example, the merchant name “XBZ!AY” is separated by delimiters (e.g., white space) as “ X B Z ! A Y”.
At operation 608, the server system 200 is configured to generate an embedding vector for each character. The embedding vector is a numerical representation of a corresponding character for processing the merchant name. In an example, if the merchant name is limited to include only alphabets, more specifically, upper-case letters, each character of the merchant name can be represented as a 26-dimensional binary vector in which the vector position corresponding to a character (e.g., ‘X’) is ‘1’ (the xth position in the 26-dimensional vector) and all other entries are ‘0’.
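The 26-dimensional one-hot embedding described above can be sketched as follows, assuming the A-Z-only restriction stated in the example:

```python
import string

def char_embedding(ch):
    """26-dimensional one-hot embedding for an upper-case letter,
    assuming the merchant name is restricted to A-Z."""
    vec = [0] * 26
    vec[string.ascii_uppercase.index(ch)] = 1
    return vec

# 'X' is the 24th letter, so the embedding has a 1 at index 23
x_vec = char_embedding("X")
```

In practice a broader character set (digits, punctuation, etc.) would need a correspondingly larger vector, or a learned dense embedding.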
At 610, the server system 200 applies an NMT model over the embedding vectors for encoding the plurality of characters. More specifically, the server system 200 sequentially learns a representation of each character and a sequence pattern of the characters (e.g., a preceding character or a succeeding character) and generates a corresponding source hidden vector based on the representation learning. In the above example, the server system 200 generates a source hidden vector HX on learning the character ‘X’ and a source hidden vector HB on learning a representation of the character ‘B’. Similarly, the server system 200 generates a set of source hidden vectors (e.g., HX, HB, HZ, H!, HA, HY) after learning the merchant name “XBZ!AY”.
At 612, the server system 200 is configured to predict the aggregated merchant name for the merchant name by performing operations 614-626.
At 614, the server system 200 determines the attention weights of the plurality of characters. The attention weight associated with each character is determined based on a preceding character predicted at previous time step, the source hidden vectors associated with the plurality of characters and a target hidden vector of the current character. In one embodiment, the attention weights are learned using a single layer neural network that learns a function to associate the source hidden vector of a character and the target hidden vector of the current character at a timestamp. In an example embodiment, the server system 200 passes the attention weights through a softmax layer to normalize the attention weights.
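One possible sketch of this scoring-and-normalization step, using a bilinear alignment function as a stand-in for the single-layer network (the matrix W is a hypothetical learned alignment matrix):

```python
import numpy as np

def attention_weights(source_hidden, target_hidden, W):
    """Score each source hidden vector against the current target hidden
    vector with a learned alignment function, then softmax-normalize the
    scores into attention weights.

    source_hidden: (seq_len, dim) encoder outputs, one row per character
    target_hidden: (dim,) decoder state for the current character
    W:             (dim, dim) hypothetical learned alignment matrix
    """
    scores = source_hidden @ (W @ target_hidden)  # one score per character
    e = np.exp(scores - scores.max())
    return e / e.sum()                            # normalized attention weights

# Toy example: 3 source characters, hidden size 2, identity alignment;
# the first source vector aligns best with the target state
H = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
w = attention_weights(H, np.array([1.0, 0.0]), np.eye(2))
```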
At 616, the server system 200 determines a context vector based on the attention weights. In an embodiment, the context vector Ct is determined based on a variable-length attention vector and the set of source hidden vectors associated with the plurality of characters. The variable-length attention vector (at) is determined based on normalized attention weights that are obtained by passing the attention weights through a softmax layer.
At operation 618, the server system 200 combines the context vector with a merchant contextual vector to generate a context-aware encoded vector. In other words, the server system 200 aggregates the context vector and the merchant contextual vector during each decoding timestamp to generate the context-aware encoded vector. The merchant contextual vector provides merchant-specific information in a vector form, for example, an industry code, a merchant category code (MCC), a merchant Uniform Resource Locator (URL), a merchant ticket size, a merchant tax identifier, etc. The merchant contextual vector aids in distinguishing merchants with a similar name operating in different domains based on the merchant-specific information. For example, the merchant name “CROSSWORD” may refer to a book store or a cloth brand store. The merchant-specific information in the context-aware encoded vector helps to distinguish payment transaction data associated with the book store from that of the cloth brand store.
At 620, the server system 200 determines an attention mechanism hidden state vector {tilde over (H2d)} based at least on the context-aware encoded vector and the target hidden vector of a current character which is to be predicted.
At 622, the server system 200 predicts the current character based on the attention mechanism hidden state vector. At 624, the server system 200 checks whether all characters of the aggregated merchant name have been predicted.
At 626, if all the characters of the aggregated merchant name have not been predicted, the server system 200 provides the attention mechanism hidden state vector as feedback into the decoder (see, 404 of FIG. 4) for predicting subsequent/next characters of the aggregated merchant name.
Otherwise, at 628, the server system outputs the aggregated merchant name (e.g., “ZAXBY”) of the merchant and stops the decoding process.
At 630, the server system 200 generates a merchant aggregation table by grouping all merchant locations with their respective aggregated merchant names. An example of the merchant aggregation table is shown and explained with reference to FIG. 5B.
FIGS. 7A and 7B, collectively, represent a flow diagram of a method 700 for generating an aggregated merchant name, in accordance with an example embodiment. The method 700 depicted in the flow diagram may be executed by at least one server, for example, the server system 108 or the server system 200 explained with reference to FIG. 2, the payment server 106, or the acquirer server 102. Operations of the flow diagram of the method 700, and combinations of operations in the flow diagram, may be implemented by, for example, hardware, firmware, a processor, circuitry and/or a different device associated with the execution of software that includes one or more computer program instructions. It is noted that the operations of the method 700 can be described and/or practiced by using a system other than these server systems. The method 700 starts at operation 702.
At operation 702, the method 700 includes receiving, by the server system 108, payment transaction data from the acquirer server 102. The payment transaction data includes, but is not limited to, a merchant name data field associated with a merchant. The merchant name data field includes a merchant name associated with the merchant. In some examples, the merchant name may not be accurate and may include noise/junk characters. In one embodiment, the method includes performing data pre-processing over the merchant name for filtering one or more noise characters and segmenting the merchant name into a plurality of characters to obtain at least one character.
At operation 704, the method 700 includes converting, by the server system 108, the plurality of characters of the merchant name data field into a plurality of embedding vectors. Each character of the plurality of characters is associated with an embedding vector of the plurality of embedding vectors.
At operation 706, the method 700 includes generating, by the server system 108, a set of source hidden vectors corresponding to the plurality of characters. A source hidden vector associated with a character is generated by applying a neural machine translation (NMT) model using an encoder (see, 402 in FIG. 4) over a corresponding embedding vector.
In one example, the aggregated merchant name may include n characters (i.e., Merchant Name = {C1, C2, …, Cn}).
At operation 708, the method 700 includes predicting, by the server system 108, the aggregated merchant name character by character using a decoder (see, 404 in FIG. 4). A current character associated with the aggregated merchant name is predicted by performing operations 708a-708f, iteratively. According to the above example, at first, the server system 108 is configured to predict the current character “C1” and then iteratively predict the next characters (C2, C3, …, Cn) by performing similar operations as followed for predicting the current character “C1”.
At operation 708a, the method 700 includes determining, by the server system 108, attention weights of the plurality of characters based, at least in part, on the set of source hidden vectors and a current target hidden vector associated with the current character.
At operation 708b, the method 700 includes determining, by the server system 108, a context vector based, at least in part, on the attention weights associated with the plurality of characters.
At 708c, the method 700 includes concatenating, by the server system, the context vector with a merchant contextual vector for obtaining a context-aware encoded vector. In one embodiment, the method includes generating the merchant contextual vector based, at least in part, on merchant-specific information included in the payment transaction data. The merchant contextual vector is utilized for differentiating alike merchant names. In one example, the merchant contextual vector enables the server system 108 to distinguish a merchant name from a similar merchant name operating within a different category of goods/services.
At operation 708d, the method 700 includes calculating, by the server system 108, an attention mechanism hidden state vector based, at least in part, on the context-aware encoded vector and the current target hidden vector.
At operation 708e, the method 700 includes determining, by the server system 108, the current character based, at least in part, on the attention mechanism hidden state vector. In one embodiment, the method includes generating, by the server system 108, a probability distribution of characters based at least on the attention mechanism hidden state vector associated with the current character. The probability distribution of characters indicates a selection probability value of a character being selected as the current character for the aggregated merchant name. A character having a selection probability value greater than a predetermined threshold value is selected as the current character for the aggregated merchant name.
At operation 708f, the method 700 includes providing, by the server system 108, the attention mechanism hidden state vector as a feedback into the decoder (see 404 in FIG. 4) for predicting a next character associated with the aggregated merchant name.
The sequence of operations of the method 700 need not necessarily be executed in the same order as they are presented. Further, one or more operations may be grouped together and performed in the form of a single step, or one operation may have several sub-steps that may be performed in a parallel or sequential manner.
FIG. 8 is a simplified block diagram of a payment server 800, in accordance with an embodiment of the present disclosure. The payment server 800 is an example of the payment server 106 of FIG. 1. The payment network 104 may be used by the payment server 800, an acquirer server 102, and an issuer server as a payment interchange network. Examples of the payment network 104 may include, but are not limited to, the Mastercard® payment system interchange network. The payment server 800 includes a processing system 805 configured to extract programming instructions from a memory 810 to provide various features of the present disclosure. The components of the payment server 800 provided herein may not be exhaustive, and the payment server 800 may include more or fewer components than those depicted in FIG. 8. Further, two or more components may be embodied in one single component, and/or one component may be configured using multiple sub-components to achieve the desired functionalities. Some components of the payment server 800 may be configured using hardware elements, software elements, firmware elements, and/or a combination thereof.
Via a communication interface 815, the processing system 805 receives payment transaction data (i.e., a “payment transaction authorization request”) from a remote device 820 such as the acquirer server 102. The communication may be achieved through API calls, without loss of generality. The payment server 800 includes a database, such as a transaction database 825. The transaction database 825 may include, but is not limited to, payment transaction data, such as an issuer ID, a country code, an acquirer ID, a merchant name, a merchant location, etc. In one embodiment, the contents of the transaction database 825 are stored based on merchant aggregation rules. Transaction data of merchants with the same aggregated merchant name are stored in a group. The payment server 800 may also perform operations similar to those performed by the server system 108 or the server system 200 for determining an aggregated merchant name associated with payment transaction data. For the sake of brevity, a detailed explanation of the payment server 800 is omitted herein; reference may be made to FIGS. 1 and 2.
FIG. 9 is a simplified block diagram of an acquirer server 900, in accordance with one embodiment of the present disclosure. The acquirer server 900 is associated with an acquirer bank, which may be associated with one or more merchants (e.g., the merchants 112a-112c). The merchant may have established an account to accept payment for the purchase of goods from customers. The acquirer server 900 is an example of the acquirer server 102 of FIG. 1 or may be embodied in the acquirer server 102. Further, the acquirer server 900 is configured to facilitate transactions with an issuer server (not shown) for payment transactions using the payment network 104 of FIG. 1. The acquirer server 900 includes a processing module 905 communicably coupled to a merchant database 910 and a communication module 915. The communication module 915 is configured to receive payment transaction data associated with a payment transaction performed at a merchant terminal. This payment transaction data is stored in the merchant database 910 and is also sent to the payment server 800 via the payment network 104.
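The acquirer-side flow above (receive at the communication module, persist in the merchant database, relay to the payment server) can be sketched as follows. All names here (`MerchantDatabase`, `forward_to_payment_server`, `on_transaction_received`) are illustrative assumptions, not identifiers from the disclosure, and the forwarding call merely stands in for the API call over the payment network 104.

```python
class MerchantDatabase:
    """Minimal stand-in for the merchant database 910: an append-only
    store of transaction records."""
    def __init__(self):
        self.records = []

    def store(self, txn):
        self.records.append(txn)

forwarded = []  # captures what would be sent to the payment server 800

def forward_to_payment_server(txn):
    # Stand-in for transmitting the record over the payment network 104.
    forwarded.append(txn)

def on_transaction_received(db, txn):
    db.store(txn)                    # persist locally at the acquirer
    forward_to_payment_server(txn)   # and relay for authorization

db = MerchantDatabase()
on_transaction_received(db, {"mid": "M-001", "amount": 12.0})
```

The sketch captures the dual role of the received record: one copy is retained for the acquirer's own processing, while an identical copy travels onward for authorization and aggregation.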
The components of the acquirer server 900 provided herein may not be exhaustive, and the acquirer server 900 may include more or fewer components than those depicted in FIG. 9. Further, two or more components may be embodied in one single component, and/or one component may be configured using multiple sub-components to achieve the desired functionalities. Some components of the acquirer server 900 may be configured using hardware elements, software elements, firmware elements, and/or a combination thereof.
Further, the merchant database 910 includes a table which stores one or more merchant parameters, such as, but not limited to, a merchant primary account number (PAN), a merchant name, a merchant ID (MID), a merchant category code (MCC), a merchant city, a merchant postal code, an MAID, a merchant brand name, an industry code, a merchant URL, a merchant ticket size, and terminal identification numbers (TIDs) associated with merchant terminals (e.g., the POS terminals or any other merchant electronic devices) used for processing transactions, among others. The processing module 905 is configured to use the MID or any other merchant parameter, such as the merchant PAN, to identify the merchant during the normal processing of payment transactions, adjustments, chargebacks, end-of-month fees, loyalty programs associated with the merchant, and so forth. In one embodiment, the processing module 905 generates a merchant contextual vector based on the merchant parameters (also referred to as merchant-specific information). The processing module 905 may be configured to store and update the merchant parameters in the merchant database 910 for later retrieval. In an embodiment, the communication module 915 is capable of facilitating operative communication with a remote device 920 such as a merchant terminal or a payment server.
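One way the merchant contextual vector could be derived from the merchant parameters is sketched below. The disclosure does not fix an encoding, so this is a hedged illustration under stated assumptions: each categorical parameter (here, an MCC and a merchant city) is one-hot encoded against a small assumed vocabulary and the pieces are concatenated; the vocabularies, the parameter subset, and the function names are all hypothetical.

```python
# Assumed vocabularies for illustration only.
MCC_VOCAB = ["5411", "5812", "5999"]     # grocery, restaurant, misc. retail
CITY_VOCAB = ["NEW YORK", "CHICAGO"]

def one_hot(value, vocab):
    """One-hot encode a categorical value against a fixed vocabulary."""
    return [1.0 if value == v else 0.0 for v in vocab]

def merchant_contextual_vector(params):
    """Concatenate encodings of a subset of the merchant parameters
    (here just 'mcc' and 'city') into a single contextual vector."""
    return one_hot(params["mcc"], MCC_VOCAB) + one_hot(params["city"], CITY_VOCAB)

vec = merchant_contextual_vector({"mcc": "5812", "city": "CHICAGO"})
```

In practice, learned embeddings would likely replace one-hot encodings, but the shape of the operation is the same: per-parameter encodings concatenated into one fixed-length vector that the decoder can consume alongside the attention context vector.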
The disclosed methods with reference to FIGS. 1 to 9, or one or more operations of the method 700, may be implemented using software including computer-executable instructions stored on one or more computer-readable media (e.g., non-transitory computer-readable media, such as one or more optical media discs, volatile memory components (e.g., DRAM or SRAM), or nonvolatile memory or storage components (e.g., hard drives or solid-state nonvolatile memory components, such as Flash memory components)) and executed on a computer (e.g., any suitable computer, such as a laptop computer, net book, Web book, tablet computing device, smart phone, or other mobile computing device). Such software may be executed, for example, on a single local computer or in a network environment (e.g., via the Internet, a wide-area network, a local-area network, a remote web-based server, a client-server network (such as a cloud computing network), or other such network) using one or more network computers. Additionally, any of the intermediate or final data created and used during implementation of the disclosed methods or systems may also be stored on one or more computer-readable media (e.g., non-transitory computer-readable media) and are considered to be within the scope of the disclosed technology. Furthermore, any of the software-based embodiments may be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means.
Although the disclosure has been described with reference to specific exemplary embodiments, it is noted that various modifications and changes may be made to these embodiments without departing from the broad spirit and scope of the disclosure. For example, the various operations, blocks, etc. described herein may be enabled and operated using hardware circuitry (for example, complementary metal oxide semiconductor (CMOS) based logic circuitry), firmware, software and/or any combination of hardware, firmware, and/or software (for example, embodied in a machine-readable medium). For example, the apparatuses and methods may be embodied using transistors, logic gates, and electrical circuits (for example, application specific integrated circuit (ASIC) circuitry and/or in Digital Signal Processor (DSP) circuitry).
Particularly, the server system 200 and its various components, such as the computer system and the database, may be enabled using software and/or using transistors, logic gates, and electrical circuits (for example, integrated circuit circuitry such as ASIC circuitry). Various embodiments of the disclosure may include one or more computer programs stored or otherwise embodied on a computer-readable medium, wherein the computer programs are configured to cause a processor or computer to perform one or more operations. A computer-readable medium storing, embodying, or encoding a computer program, or similar language, may be embodied as a tangible data storage device storing one or more software programs that are configured to cause a processor or computer to perform one or more operations. Such operations may be, for example, any of the steps or operations described herein. In some embodiments, the computer programs may be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as floppy disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g., magneto-optical disks), CD-ROM (compact disc read only memory), CD-R (compact disc recordable), CD-R/W (compact disc rewritable), DVD (Digital Versatile Disc), BD (BLU-RAY® Disc), and semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash memory, RAM (random access memory), etc.). Additionally, a tangible data storage device may be embodied as one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices. In some embodiments, the computer programs may be provided to a computer using any type of transitory computer readable media.
Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g., electric wires, and optical fibers) or a wireless communication line.
Various embodiments of the invention, as discussed above, may be practiced with steps and/or operations in a different order, and/or with hardware elements in configurations different from those which are disclosed. Therefore, although the invention has been described based upon these exemplary embodiments, it is noted that certain modifications, variations, and alternative constructions may be apparent and are well within the spirit and scope of the invention.
Although various exemplary embodiments of the invention are described herein in a language specific to structural features and/or methodological acts, the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as exemplary forms of implementing the claims.

Documents

Application Documents

# Name Date
1 202041042228-STATEMENT OF UNDERTAKING (FORM 3) [29-09-2020(online)].pdf 2020-09-29
2 202041042228-POWER OF AUTHORITY [29-09-2020(online)].pdf 2020-09-29
3 202041042228-FORM 1 [29-09-2020(online)].pdf 2020-09-29
4 202041042228-FIGURE OF ABSTRACT [29-09-2020(online)].jpg 2020-09-29
5 202041042228-DRAWINGS [29-09-2020(online)].pdf 2020-09-29
6 202041042228-DECLARATION OF INVENTORSHIP (FORM 5) [29-09-2020(online)].pdf 2020-09-29
7 202041042228-COMPLETE SPECIFICATION [29-09-2020(online)].pdf 2020-09-29
8 202041042228-Abstract_29-09-2020.jpg 2020-09-29
9 202041042228-Correspondence-Power of Attorney-07-10-2020.pdf 2020-10-07
10 202041042228-Proof of Right [15-02-2021(online)].pdf 2021-02-15
11 202041042228-Correspondence, Assignment_19-02-2021.pdf 2021-02-19
12 202041042228-FORM 18 [18-09-2024(online)].pdf 2024-09-18
13 202041042228-FER.pdf 2025-11-06

Search Strategy

1 202041042228_SearchStrategyNew_E_SearchStrategyE_04-11-2025.pdf