Abstract: NATURAL LANGUAGE PROCESSING (NLP) BASED SYSTEMS AND METHODS FOR RECOMMENDATION OF ITEMS. Conventionally, industries have dealt with diverse categories of products/items for recommendation. This gives rise to a display taxonomy for the products. The art of matching products/items with certainty is critical to infer price gaps, which can significantly alter a competitive landscape. Manually comparing product features is time-consuming and error-prone, leading to inaccurate results. The present disclosure provides systems and methods that receive items pertaining to various entities (a retailer's and a competitor's), which are pre-processed to obtain a pre-processed dataset. Taxonomy codes are tagged to a subset of items amongst the pre-processed dataset to obtain code tagged items having attributes. The attributes are converted to feature vectors, and models are built using the code tagged items and the feature vectors. Using the models, a third set of items is obtained, and features are extracted accordingly. NLP engines process the taxonomy code, an associated taxonomy level, and a value of the features for recommending items, which are categorized accordingly. [To be published with FIG. 2]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
NATURAL LANGUAGE PROCESSING (NLP) BASED SYSTEMS AND METHODS FOR RECOMMENDATION OF ITEMS
Applicant
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001]
The disclosure herein generally relates to natural language processing (NLP) techniques, and, more particularly, to natural language processing (NLP) based systems and methods for recommendation of items.
BACKGROUND
[002]
Various industries deal with diverse categories of products. For instance, the retail industry has diverse categories of products/items such as food, fashion, alcohol, dairy, pantries, electronics, health, beauty, home improvement, office supplies, footwear, furniture, and so on. These categories are further sub-divided into multiple sub-categories with many levels to drill down with finer nuances of products. This gives rise to a display taxonomy for the products on e-commerce websites. This taxonomy may be either shallow or deep, based on a scheme of things.
[003]
With the ever-increasing width and depth of assortment in the digital era, it is essential to understand how a product is placed in terms of price, offers, discounts, and so on, in comparison to competitors. This intelligence is required on a real-time or near-real-time basis, to stay competitive and relevant to consumers. Hence, matching similar items from competitors' vast gamut of products is quite challenging.
[004]
The complexity of product matching comes to the fore as there is no specified standard for the attributes used in product definition, hence the same varies with each competitor. The descriptions and images vary extensively, and language also differs if competitors are spread across geographies. The art of matching products with certainty is critical to infer price gaps, which can significantly alter a retailer's competitive landscape. Manually comparing product features is time-consuming and error-prone, leading to inaccurate results.
SUMMARY
[005]
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[006]
For example, in one aspect, there is provided a processor implemented method for recommendation of items. The method comprises receiving, via one or more hardware processors, information comprising a first set of items pertaining to a first entity, and a second set of items pertaining to a second entity; pre-processing, via the one or more hardware processors, the information comprising the first set of items pertaining to the first entity and the second set of items pertaining to the second entity to obtain a pre-processed dataset; obtaining, via the one or more hardware processors, a taxonomy code to at least a subset of items amongst the pre-processed dataset to obtain a set of code tagged items, wherein each code tagged item amongst the set of code tagged items is associated with one or more attributes; converting, by using a sentence encoder via the one or more hardware processors, the one or more attributes comprised in the set of code tagged items into a feature vector, wherein the feature vector is associated with the first set of items and the second set of items; building, via the one or more hardware processors, a first model and a second model using the set of code tagged items and the feature vector; predicting, by using the first model and the second model via the one or more hardware processors, (i) a first taxonomy level-based value, and (ii) the taxonomy code for each remaining item amongst the pre-processed dataset, respectively, to obtain a third set of items; extracting, via the one or more hardware processors, one or more features from the subset of items, and the third set of items; processing, via the one or more hardware processors, the taxonomy code, an associated taxonomy level, and a value associated with the one or more features in a plurality of natural language processing (NLP) engines to obtain a first set of recommended items; applying, via the one or more hardware processors, one or more rules on the first set of recommended items to obtain a fourth set of items, wherein each rule is associated with at least one NLP engine amongst the plurality of NLP engines; grouping, via the one or more hardware processors, one or more items from the fourth set of items into one or more categories; and recommending, via the one or more hardware processors, at least a subset of items amongst the fourth set of items to obtain a second set of recommended items, wherein the second set of recommended items is based on a weightage associated to each of the plurality of NLP engines.
[007]
In an embodiment, the step of obtaining the taxonomy code is based on at least one of an associated item category and an associated item sub-category.
[008]
In an embodiment, the step of extracting the one or more features from the subset of items, and the third set of items comprises concatenating one or more attributes associated with the subset of items, and the third set of items; obtaining a predefined attribute value for each taxonomy code of the subset of items, and the third set of items; performing a comparison of keywords between the subset of items, and the third set of items; and extracting the one or more features from the subset of items, and the third set of items based on the comparison and the predefined attribute value.
[009]
In an embodiment, the step of processing by a first NLP engine amongst the plurality of NLP engines comprises filtering the second set of items for each item comprised in the first set of items based on the taxonomy code; creating a feature summary for the first set of items and the second set of items based on the value of the one or more features; converting the feature summary into the feature vector of the first set of items and the second set of items; computing a cosine similarity score for the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; and obtaining the first set of recommended items based on the cosine similarity score.
[010]
In an embodiment, the step of processing by a second NLP engine amongst the plurality of NLP engines comprises, for each taxonomy code: traversing through the associated taxonomy level for determining a match between an item of the first set of items and an item of the second set of items to obtain a set of level-based items; concatenating one or more attributes of the set of level-based items to obtain a set of concatenated attributes; converting the set of concatenated attributes into the feature vector of the first set of items and the second set of items; computing a cosine distance score between the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; computing a taxonomy based matching score based on the cosine distance score; and obtaining the first set of recommended items based on the taxonomy based matching score.
[011]
In an embodiment, the step of processing by a third NLP engine amongst the plurality of NLP engines comprises creating an index of the second set of items; identifying a semantic match for a query item associated with the first set of items in the index of the second set of items; computing a semantic matching score based on the semantic match; and obtaining the first set of recommended items based on the semantic matching score.
[012]
In an embodiment, the step of processing by a fourth NLP engine amongst the plurality of NLP engines comprises performing a comparison of a name associated with each item amongst the first set of items with each item amongst the second set of items; computing a string matching score based on the comparison; and obtaining the first set of recommended items based on the string matching score.
[013]
In an embodiment, the step of grouping comprises grouping one or more items into a first category based on an item comprised in the first set of recommended items that is recommended by a first combination of NLP engines; grouping one or more items into a second category based on an item comprised in the first set of recommended items that is recommended by a second combination of NLP engines; grouping one or more items into a third category based on an item comprised in the first set of recommended items that is recommended by a third combination of NLP engines; and grouping one or more items into a fourth category based on an item comprised in the first set of recommended items that is recommended by an NLP engine.
[014]
In an embodiment, the weightage associated to each of the plurality of NLP engines is determined based on a match of an item comprised in the fourth set of items with an associated item amongst the second set of items.
[015]
In an embodiment, the method further comprises updating the weightage of each of the plurality of NLP engines based on a comparison of (i) one or more items amongst the second set of recommended items, and (ii) a fifth set of items; and sorting the second set of recommended items based on the updated weightage.
[016]
In another aspect, there is provided a processor implemented system for recommendation of items. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to receive information comprising a first set of items pertaining to a first entity, and a second set of items pertaining to a second entity; pre-process the information comprising the first set of items pertaining to the first entity and the second set of items pertaining to the second entity to obtain a pre-processed dataset; obtain a taxonomy code to at least a subset of items amongst the pre-processed dataset to obtain a set of code tagged items, wherein each code tagged item amongst the set of code tagged items is associated with one or more attributes; convert, by using a sentence encoder, the one or more attributes comprised in the set of code tagged items into a feature vector, wherein the feature vector is associated with the first set of items and the second set of items; build a first model and a second model using the set of code tagged items and the feature vector; predict, by using the first model and the second model, (i) a first taxonomy level-based value, and (ii) the taxonomy code for each remaining item amongst the pre-processed dataset, respectively, to obtain a third set of items; extract one or more features from the subset of items, and the third set of items; process the taxonomy code, an associated taxonomy level, and a value associated with the one or more features in a plurality of natural language processing (NLP) engines to obtain a first set of recommended items; apply one or more rules on the first set of recommended items to obtain a fourth set of items, wherein each rule is associated with at least one NLP engine amongst the plurality of NLP engines; group one or more items from the fourth set of items into one or more categories; and recommend at least a subset of items amongst the fourth set of items to obtain a second set of recommended items, wherein the second set of recommended items is based on a weightage associated to each of the plurality of NLP engines.
[017]
In an embodiment, the taxonomy code is obtained based on at least one of an associated item category and an associated item sub-category.
[018]
In an embodiment, the one or more features are extracted from the subset of items, and the third set of items by concatenating one or more attributes associated with the subset of items, and the third set of items; obtaining a predefined attribute value for each taxonomy code of the subset of items, and the third set of items; performing a comparison of keywords between the subset of items, and the third set of items; and extracting the one or more features from the subset of items, and the third set of items based on the comparison and the predefined attribute value.
[019]
In an embodiment, a first NLP engine amongst the plurality of NLP engines processes the taxonomy code, the associated taxonomy level, and the value associated with the one or more features by filtering the second set of items for each item comprised in the first set of items based on the taxonomy code; creating a feature summary for the first set of items and the second set of items based on the value of the one or more features; converting the feature summary into the feature vector of the first set of items and the second set of items; computing a cosine similarity score for the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; and obtaining the first set of recommended items based on the cosine similarity score.
[020]
In an embodiment, a second NLP engine amongst the plurality of NLP engines processes the taxonomy code, the associated taxonomy level, and the value associated with the one or more features by performing, for each taxonomy code: traversing through the associated taxonomy level for determining a match between an item of the first set of items and an item of the second set of items to obtain a set of level-based items; concatenating one or more attributes of the set of level-based items to obtain a set of concatenated attributes; converting the set of concatenated attributes into the feature vector of the first set of items and the second set of items; computing a cosine distance score between the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; computing a taxonomy based matching score based on the cosine distance score; and obtaining the first set of recommended items based on the taxonomy based matching score.
[021]
In an embodiment, a third NLP engine amongst the plurality of NLP engines processes the taxonomy code, the associated taxonomy level, and the value associated with the one or more features by creating an index of the second set of items; identifying a semantic match for a query item associated with the first set of items in the index of the second set of items; computing a semantic matching score based on the semantic match; and obtaining the first set of recommended items based on the semantic matching score.
[022]
In an embodiment, a fourth NLP engine amongst the plurality of NLP engines processes the taxonomy code, the associated taxonomy level, and the value associated with the one or more features by performing a comparison of a name associated with each item amongst the first set of items with each item amongst the second set of items; computing a string matching score based on the comparison; and obtaining the first set of recommended items based on the string matching score.
[023]
In an embodiment, the one or more categories are obtained by grouping one or more items into a first category based on an item comprised in the first set of recommended items that is recommended by a first combination of NLP engines; grouping one or more items into a second category based on an item comprised in the first set of recommended items that is recommended by a second combination of NLP engines; grouping one or more items into a third category based on an item comprised in the first set of recommended items that is recommended by a third combination of NLP engines; and grouping one or more items into a fourth category based on an item comprised in the first set of recommended items that is recommended by an NLP engine.
[024]
In an embodiment, the weightage associated to each of the plurality of NLP engines is determined based on a match of an item comprised in the fourth set of items with an associated item amongst the second set of items.
[025]
In an embodiment, the one or more hardware processors are further configured by the instructions to update the weightage of each of the plurality of NLP engines based on a comparison of (i) one or more items amongst the second set of recommended items, and (ii) a fifth set of items; and sort the second set of recommended items based on the updated weightage.
[026]
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause recommendation of items by receiving information comprising a first set of items pertaining to a first entity, and a second set of items pertaining to a second entity; pre-processing the information comprising the first set of items pertaining to the first entity and the second set of items pertaining to the second entity to obtain a pre-processed dataset; obtaining a taxonomy code to at least a subset of items amongst the pre-processed dataset to obtain a set of code tagged items, wherein each code tagged item amongst the set of code tagged items is associated with one or more attributes; converting, by using a sentence encoder, the one or more attributes comprised in the set of code tagged items into a feature vector, wherein the feature vector is associated with the first set of items and the second set of items; building a first model and a second model using the set of code tagged items and the feature vector; predicting, by using the first model and the second model, (i) a first taxonomy level-based value, and (ii) the taxonomy code for each remaining item amongst the pre-processed dataset, respectively, to obtain a third set of items; extracting one or more features from the subset of items, and the third set of items; processing the taxonomy code, an associated taxonomy level, and a value associated with the one or more features in a plurality of natural language processing (NLP) engines to obtain a first set of recommended items; applying, via the one or more hardware processors, one or more rules on the first set of recommended items to obtain a fourth set of items, wherein each rule is associated with at least one NLP engine amongst the plurality of NLP engines; grouping, via the one or more hardware processors, one or more items from the fourth set of items into one or more categories; and recommending at least a subset of items amongst the fourth set of items to obtain a second set of recommended items, wherein the second set of recommended items is based on a weightage associated to each of the plurality of NLP engines.
[027]
In an embodiment, the step of obtaining the taxonomy code is based on at least one of an associated item category and an associated item sub-category.
[028]
In an embodiment, the step of extracting the one or more features from the subset of items, and the third set of items comprises concatenating one or more attributes associated with the subset of items, and the third set of items; obtaining a predefined attribute value for each taxonomy code of the subset of items, and the third set of items; performing a comparison of keywords between the subset of items, and the third set of items; and extracting the one or more features from the subset of items, and the third set of items based on the comparison and the predefined attribute value.
[029]
In an embodiment, the step of processing by a first NLP engine amongst the plurality of NLP engines comprises filtering the second set of items for each item comprised in the first set of items based on the taxonomy code; creating a feature summary for the first set of items and the second set of items based on the value of the one or more features; converting the feature summary into the feature vector of the first set of items and the second set of items; computing a cosine similarity score for the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; and obtaining the first set of recommended items based on the cosine similarity score.
[030]
In an embodiment, the step of processing by a second NLP engine amongst the plurality of NLP engines comprises, for each taxonomy code: traversing through the associated taxonomy level for determining a match between an item of the first set of items and an item of the second set of items to obtain a set of level-based items; concatenating one or more attributes of the set of level-based items to obtain a set of concatenated attributes; converting the set of concatenated attributes into the feature vector of the first set of items and the second set of items; computing a cosine distance score between the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; computing a taxonomy based matching score based on the cosine distance score; and obtaining the first set of recommended items based on the taxonomy based matching score.
[031]
In an embodiment, the step of processing by a third NLP engine amongst the plurality of NLP engines comprises creating an index of the second set of items; identifying a semantic match for a query item associated with the first set of items in the index of the second set of items; computing a semantic matching score based on the semantic match; and obtaining the first set of recommended items based on the semantic matching score.
[032]
In an embodiment, the step of processing by a fourth NLP engine amongst the plurality of NLP engines comprises performing a comparison of a name associated with each item amongst the first set of items with each item amongst the second set of items; computing a string matching score based on the comparison; and obtaining the first set of recommended items based on the string matching score.
[033]
In an embodiment, the step of grouping comprises grouping one or more items into a first category based on an item comprised in the first set of recommended items that is recommended by a first combination of NLP engines; grouping one or more items into a second category based on an item comprised in the first set of recommended items that is recommended by a second combination of NLP engines; grouping one or more items into a third category based on an item comprised in the first set of recommended items that is recommended by a third combination of NLP engines; and grouping one or more items into a fourth category based on an item comprised in the first set of recommended items that is recommended by an NLP engine.
[034]
In an embodiment, the weightage associated to each of the plurality of NLP engines is determined based on a match of an item comprised in the fourth set of items with an associated item amongst the second set of items.
[035]
In an embodiment, the instructions which when executed by the one or more hardware processors further cause updating the weightage of each of the plurality of NLP engines based on a comparison of (i) one or more items amongst the second set of recommended items, and (ii) a fifth set of items; and sorting the second set of recommended items based on the updated weightage.
[036]
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[037]
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[038]
FIG. 1 depicts an exemplary natural language processing (NLP) based system for recommendation of items, in accordance with an embodiment of the present disclosure.
[039]
FIG. 2 depicts an exemplary high level block diagram of the NLP based system of FIG. 1 for recommendation of items, in accordance with an embodiment of the present disclosure.
[040]
FIGS. 3A and 3B depict an exemplary flow chart illustrating an NLP based method for recommendation of items, using the systems of FIGS. 1-2, in accordance with an embodiment of the present disclosure.
[041]
FIG. 4 depicts an exemplary merchandise taxonomy, in accordance with an embodiment of the present disclosure.
[042]
FIG. 5 depicts a graphical representation of recalled percentage against labelled matches of items, in accordance with an embodiment of the present disclosure.
[043]
FIG. 6 depicts a graphical representation illustrating time taken for item matching (throughput) by the method of the present disclosure and conventional approach(es), in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[044]
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[045]
As mentioned earlier, industries deal with diverse categories of products. For instance, the retail industry has diverse categories of products/items such as food, fashion, alcohol, dairy, pantries, electronics, health, beauty, home improvement, office supplies, footwear, furniture, and so on. These categories are further sub-divided into multiple sub-categories with many levels to drill down with finer nuances of products. This gives rise to a display taxonomy for the products on e-commerce websites. This taxonomy may be either shallow or deep, based on a scheme of things.
[046]
The complexity of product matching comes to the fore as there is no specified standard for the attributes used in product definition, hence the same varies with each competitor. The descriptions and images vary extensively, and language also differs if competitors are spread across geographies. The art of matching products with certainty is critical to infer price gaps, which can significantly alter a retailer's competitive landscape. Manually comparing product features is time-consuming and error-prone, leading to inaccurate results.
[047]
Embodiments of the present disclosure provide systems and methods that implement various natural language processing (NLP) engines for recommendation of items. More specifically, items (e.g., a first set of items and a second set of items) pertaining to various entities (e.g., say a retailer's and a competitor's) are fed as input to the system and pre-processed to obtain a pre-processed dataset. Taxonomy codes are then tagged to at least a subset of items amongst the pre-processed dataset to obtain code tagged items. The code tagged items have one or more associated attributes. The attributes are then converted to feature vectors which are associated with items of the entities. Further, specific models are built using the code tagged items and feature vectors. Using the specific models, (i) a first taxonomy level-based value, and (ii) the taxonomy code are predicted for each remaining item amongst the pre-processed dataset, respectively, to obtain a third set of items. Further, features are extracted from the subset of items and the third set of items. Further, the system 100 implements a plurality of NLP engines which process the taxonomy code, an associated taxonomy level, and a value associated with the one or more features to obtain a first set of recommended items. Rules are then applied on the first set of recommended items to obtain a fourth set of items, and items from the fourth set are grouped into various categories for further recommendation of items (e.g., also referred to as a second set of recommended items). This second set of recommended items is provided to the first entity (e.g., say a retailer) who can then analyse and perform a price and offer analysis in view of the second set of items of the second entity (e.g., say the competition).
[048]
Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[049]
FIG. 1 depicts an exemplary natural language processing (NLP) based system 100 for recommendation of items, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 may also be referred to as 'recommendation system' or 'items recommendation system' and may be interchangeably used herein. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices (e.g., smartphones, tablet phones, mobile communication devices, and the like), workstations, mainframe computers, servers, a network cloud, and the like.
[050]
The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[051]
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic-random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises information of items, associated categories pertaining to various entities (e.g., entity 1, entity 2, and so on). The database 108 further comprises taxonomy codes, taxonomy levels, attributes of the items, feature vectors of the items, and the like. The memory 102 stores various NLP engines which when executed enable the system 100 to perform specific operations/steps of the method described herein. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[052]
FIG. 2, with reference to FIG. 1, depicts an exemplary high level block diagram of the NLP based system 100 of FIG. 1 for recommendation of items, in accordance with an embodiment of the present disclosure.
[053]
FIGS. 3A and 3B, with reference to FIGS. 1 and 2, depict an exemplary flow chart illustrating a NLP based method for recommendation of items, using the systems 100 of FIGS. 1-2, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, the block diagram of the system 100 depicted in FIG. 2, and the flow diagram as depicted in FIGS. 3A and 3B. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[054]
At step 202 of the method of the present disclosure, the one or more hardware processors 104 receive information comprising a first set of items pertaining to a first entity, and a second set of items pertaining to a second entity. The items may include but are not limited to products sold/selling/or to be sold by the first entity and the second entity. In an embodiment, the first entity may be a retailer and the second entity may be a competitor (also referred to as a competition). It is to be understood by a person having ordinary skill in the art or person skilled in the art that such examples of items pertaining to products in retail domain shall not be construed as limiting the scope of present disclosure. In other words, the system 100 and the method of the present disclosure may be implemented across industry domains (e.g., manufacturing, healthcare, information technology, and so on), including for services sold/selling/or to be sold by various entities.
[055]
The first set of items (e.g., retailer items) and the second set of items (e.g., competitor's items) are fed as an input to the system 100 as depicted in FIG. 2. Such items information may be obtained in the form of metadata, in one example embodiment. Below Table 1 and Table 2 depict the first set of items (e.g., retailer items) and the second set of items (e.g., competitor's items), respectively.
Table 1 (retailer’s items)
GLN | Party name | Country of origin | descriptionshort | tradeitemmeasurement_packingunit
4053213000110 | XYZ LLC | CN | Gamestation | EA
4053213000110 | XYZ LLC | CN | Gamestation_xshock | EA
… | … | … | … | …
4053213000110 | XYZ LLC | CN | Tausunkt | EA
Table 2 (competitor's items)
GLN | Competitor name | Country of origin | descriptionshort | Store address
2005105007507 | ABC Corp | USA | X-MEN Active Protect ajándékcsomag | Location 1
2005105006613 | ABC Corp | USA | Green Mojito & Cedarwood ajándékcsomag világító cipőfűzővel | Location 1
… | … | … | … | …
2005105007508 | ABC Corp | USA | X-MEN Men Fresh ajándékcsomag | Location 1
[056]
It is to be understood by a person having ordinary skill in the art or a person skilled in the art that such examples of items pertaining to the first entity and the second entity shall not be construed as limiting the scope of the present disclosure. Likewise, the information obtained as in the above tables shall not be construed as limiting the scope of the present disclosure. In other words, other details such as ingredients, net content, online activity, purchasing group, validity, supplier identifier, supplier name, and the like may also be obtained. For the sake of brevity, only a few details are shown in the above Tables 1 and 2.
[057]
At step 204 of the method of the present disclosure, the one or more hardware processors 104 pre-process the information comprising the first set of items pertaining to the first entity and the second set of items pertaining to the second entity to obtain a pre-processed dataset. Below Tables 3 and 4 depict the pre-processed dataset pertaining to the first entity and the second entity.
Table 3 (pre-processed dataset of first entity)
item_id | product_name | Code | brand_name | variant_brandname | brand_type | country_of_origin | allergen
863 | Primel | 543564 | Brand 1 | NULL | Private Label | Hungary | NULL
969 | Salatpflanzen | 6762 | Brand 2 | NULL | Private Label | Austria | NULL
1058 | Kindersnackgemüse | 543560 | Brand 3 | NULL | Private Label | Austria | NULL
1059 | Fruchtgemüse | 543560 | Brand 4 | NULL | Private Label | Austria | NULL
1060 | Tomatenraritaeten | 543560 | Brand 5 | NULL | Private Label | Austria | NULL
… | … | … | … | … | … | … | …
1374 | Schilchersturm 1.5l | 9654 | Brand 8 | NULL | Private Label | Austria | Enthält Sulfite
Table 4 (pre-processed dataset of second entity)
item_id | product_name | GTCode | brand_name | brand_type | country_of_origin | allergen
1 | Mayonnaise 50% Fett 0.5 L | 1568 | Brand 20 | Manufacturer | NULL | Eier und daraus gewonnene Erzeugnisse, Senf und daraus gewonnene Erzeugnisse
2 | Leicht Mayonnaise 25% Fett 33 Portionen 0.5 L | 1568 | Brand 21 | Manufacturer | NULL | Eier und daraus gewonnene Erzeugnisse, Senf und daraus gewonnene Erzeugnisse
3 | Bio-Rapsöl kaltgepresst 0.5 L | 2126 | Brand 22 | PrivateLabel | Sorgfältig hergestellt in Deutschland | NULL
4 | Bio-Sonnenblumenöl kaltgepresst 0.5 L | 2126 | Brand 23 | PrivateLabel | Sorgfältig hergestellt in Österreich | NULL
5 | Darbo Sommersirup Holunderblüte Minze 0.5 L | 9050 | Brand 24 | Manufacturer | NULL | NULL
… | … | … | … | … | … | …
16 | Elit Elit 0.5 L | 2107 | Brand 29 | Manufacturer | Lettland | NULL
[058]
It is to be understood by a person having ordinary skill in the art or a person skilled in the art that items pertaining to the first entity and the second entity in Tables 1, 2, 3, and 4 are shown in different formats and levels of detail for better understanding of the embodiments described herein, and such examples shall not be construed as limiting the scope of the present disclosure.
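By way of illustration only, the following is a minimal sketch of the pre-processing of step 204, assuming it normalizes the raw feeds of Tables 1 and 2 into the common layout of Tables 3 and 4; the field names, the column mapping, and the cleaning rules shown here are assumptions for illustration and not the disclosed procedure.

```python
# Illustrative pre-processing sketch (assumed, not the disclosed procedure):
# normalize whitespace, map raw feed columns to a uniform dataset schema,
# and represent empty/"NULL" cells uniformly.
import re

def pre_process(record: dict) -> dict:
    def clean(value):
        # Treat missing or "NULL" cells uniformly; collapse repeated whitespace.
        if value in (None, "", "NULL"):
            return None
        return re.sub(r"\s+", " ", str(value)).strip()

    return {
        "product_name": clean(record.get("descriptionshort")),
        "brand_name": clean(record.get("Party name") or record.get("Competitor name")),
        "country_of_origin": clean(record.get("Country of origin")),
    }

# Example with a Table 1-style retailer record:
row = pre_process({"descriptionshort": "Gamestation  ", "Party name": "XYZ LLC",
                   "Country of origin": "CN"})
# {'product_name': 'Gamestation', 'brand_name': 'XYZ LLC', 'country_of_origin': 'CN'}
```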
[059]
At step 206 of the method of the present disclosure, the one or more hardware processors 104 obtain a taxonomy code (also referred to as 'tc' or 'tcode' and may be interchangeably used herein) to at least a subset of items amongst the pre-processed dataset to obtain a set of code tagged items. In an embodiment, each code tagged item amongst the set of code tagged items is associated with one or more attributes. The taxonomy code is based on at least one of an associated item category and an associated item sub-category. Table 5 depicts items of various categories, and sub-categories at various taxonomy levels (e.g., say L1, L2, L3, … L7, and so on). Table 6 depicts the second set of items tagged with the taxonomy code.
Table 5
category | subcategory | L1 | L2 | L3 | L4 | L5 | L6 | L7 | taxonomyList | Taxonomy code | TaxonomyName
Food, Beverages & Tobacco | Food Items | 412 | 422 | 433 | 0 | 0 | 0 | 0 | [412, 422, 433, 0, 0, 0, 0] | 433 | Nuts & Seeds
Food, Beverages & Tobacco | Food Items | 412 | 422 | 427 | 1568 | 0 | 0 | 0 | [412, 422, 427, 1568, 0, 0, 0] | 1568 | Mayonnaise
Health & Beauty | Personal Care | 469 | 2915 | 473 | 474 | 2747 | 0 | 0 | [469, 2915, 473, 474, 2747, 0, 0] | 2747 | Body Wash
Food, Beverages & Tobacco | Food Items | 412 | 422 | 2660 | 2126 | 0 | 0 | 0 | [412, 422, 2660, 2126, 0, 0, 0] | 2126 | Cooking Oils
Food, Beverages & Tobacco | Food Items | 412 | 422 | 428 | 1954 | 9503 | 0 | 0 | [412, 422, 428, 1954, 9503, 0, 0] | 9503 | yogurt, fruit
… | … | … | … | … | … | … | … | … | … | … | …
Food, Beverages & Tobacco | Food Items | 412 | 422 | 428 | 429 | 30002820 | 0 | 0 | [412, 422, 428, 429, 30002820, 0, 0] | 30002820 | GRATED
Table 6
item_id | product_name | TaxonomyCode | brand_name | brand_type | country_of_origin | competitor_id | product_details | … | allergens
1 | Mayonnaise 50% Fett 0.5 L | 1568 | Brand 11 | Manufacturer | NULL | 41 | - mit Freiland-… oder im Glas. | … | Eier und daraus gewonnene Erzeugnisse, Senf und daraus gewonnene Erzeugnisse
2 | Leicht Mayonnaise 25% Fett 33 Portionen 0.5 L | 1568 | Brand 12 | Manufacturer | NULL | 41 | macht das … Eiern | … | Eier und daraus gewonnene Erzeugnisse, Senf und daraus gewonnene Erzeugnisse
3 | Bio-Rapsöl kaltgepresst 0.5 L | 2126 | Brand 13 | PrivateLabel | Sorgfältig hergestellt in Deutschland | 41 | Das Bio-Rapsöl … Salatdressings. | … | NULL
4 | Bio-Sonnenblumenöl kaltgepresst 0.5 L | 2126 | Brand 14 | PrivateLabel | Sorgfältig hergestellt in Österreich | 41 | 100% Qualität … Gemüse. | … | NULL
5 | Darbo Sommersirup Holunderblüte Minze 0.5 L | 9050 | Brand 15 | Manufacturer | NULL | 41 | … | … | NULL
7 | Rauch Happy Day Mango Sprizz 0.5 L | 9061 | Brand 16 | Manufacturer | NULL | 41 | NULL | … | NULL
11 | Green Tea with Honey 0.5 L | 30001297 | Brand 17 | Manufacturer | NULL | 41 | NULL | … | NULL
14 | Mautner Markhof Hesperiden Essig 0.5 L | 2140 | Brand 18 | Manufacturer | Österreich | 41 | Hesperiden Essig ist eine… len. | … | Schwefeldioxid und Sulphite
15 | Doppelherz aktiv Eisen Vital flüssig 0.5 L | 525 | Brand 19 | Manufacturer | NULL | 41 | Eisen ist ein … werden. | … | NULL
[060]
FIG. 4, with reference to FIGS. 1 through 3B, depicts an exemplary merchandise taxonomy, in accordance with an embodiment of the present disclosure. Every item is tagged with the merchandise taxonomy code from levels L1 through L7 based on applicability. Some items can have codes tagged at all 7 levels, and some may have just (m-n) levels. The leaf node represents the taxonomy code at which the item matching starts. The relationship among the levels is shown below:
L7 ⊆ L6 ⊆ L5 ⊆ L4 ⊆ L3 ⊆ L2 ⊆ L1
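By way of illustration only, a minimal sketch of resolving the leaf taxonomy code from an item's level list (the taxonomyList column of Table 5, where unused deeper levels are padded with 0) is given below; the helper name is hypothetical.

```python
# Illustrative sketch: the deepest non-zero level of the taxonomyList is the
# leaf node at which item matching starts (cf. Table 5 and FIG. 4).
def leaf_taxonomy_code(taxonomy_list: list[int]) -> int:
    non_zero = [code for code in taxonomy_list if code != 0]
    return non_zero[-1]

# The Mayonnaise row of Table 5: levels [412, 422, 427, 1568, 0, 0, 0] -> 1568.
assert leaf_taxonomy_code([412, 422, 427, 1568, 0, 0, 0]) == 1568
```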
[061]
At step 208 of the method of the present disclosure, the one or more hardware processors 104 convert, by using a sentence encoder, the one or more attributes comprised in the set of code tagged items into a feature vector. The feature vector is associated with the first set of items and the second set of items. Below Table 7 and Table 8 depict conversion of attributes into a feature vector for both the first entity and the second entity, respectively.
Table 7 (item of first entity after vectorization – feature vector)
sellerId: 41; itemId: 10547; L1: 412; Taxonomy Code: 9120; brandName: —
additionalString: https://www.inter.at/shop/lebensmittel/-lachs-filets-mit-zitronen-pfeffer-marinade-2-stueck-250g-packung/p/2020003331350
featureString: lachs-filets mit zitronen-pfeffer- … packung 250 g
breadcrumbs: startseite > produkte > tiefkühlung > fleisch, fisch & meeresfrüchte > … packung 250 g
unitName: g; unitMeasure: 250; bio_flag: 0
cleanFeatureString: lachs filets zitronen pfeffer marinade stück packung
cleanAdditionalString: lebensmittel lachs filets mit zitronen pfeffer marinade stueck packung
cleanBcString: tiefkühlungfleisch fisch meeresfrüchtefisch
cleanAsBcString: tiefkühlungfleisch fisch meeresfrüchtefisch lebensmittel lachs filets mit zitronen pfeffer marinade stueck packung
featureVector: [-0.07285787165164948, 0.0074628968723118305, -0.07725667953491211, 0.032168369740247726, -0.02816387452185154, 0.000162382100825198]
Table 8 (item of second entity after vectorization – feature vector)
sellerId: 41; itemId: 10547; L1: 412; TaxonomyCode: 9120; brandName: —
additionalString: https://www.inter.at/shop/lebensmittel/-lachs-filets-mit-zitronen-pfeffer-marinade-2-stueck-250g-packung/p/2020003331350
featureString: lachs-filets mit …marinade … packung 250 g
breadcrumbs: startseite > produkte …2 stück 250g packung 250 g
unitName: g; unitMeasure: 250; bio_flag: 0
cleanFeatureString: lachs filets zitronen pfeffer marinade stück packung
cleanAdditionalString: lebensmittel lachs filets mit zitronen pfeffer marinade stueck packung
cleanBcString: tiefkühlungfleisch fisch meeresfrüchtefisch
cleanAsBcString: tiefkühlungfleisch fisch … filets mit ... stueck packung
featureVector: [-0.07285787165164948, 0.0074628968723118305, -0.07725667953491211, 0.000162382100825198]
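By way of illustration only, the following is a minimal sketch of the vectorization of step 208, assuming a multilingual model from the sentence-transformers library; the disclosure does not name a specific sentence encoder, so the model choice and helper name are assumptions.

```python
# Illustrative sketch of step 208 (assumed encoder, not the disclosed one):
# concatenate an item's cleaned attribute strings and encode them as one vector.
from sentence_transformers import SentenceTransformer

# A multilingual model is assumed, since item attributes mix German and English.
model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

def attributes_to_feature_vector(attributes: list[str]) -> list[float]:
    text = " ".join(a for a in attributes if a)
    return model.encode(text).tolist()

# Example with cleaned attributes similar to Tables 7 and 8:
vector = attributes_to_feature_vector([
    "lachs filets zitronen pfeffer marinade stück packung",
    "tiefkühlungfleisch fisch meeresfrüchtefisch",
])
```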
[062]
At step 210 of the method of the present disclosure, the one or more hardware processors 104 build a first model and a second model using the set of code tagged items and the feature vector. The first model may also be referred to as 'level 1 classifier model' or 'L1 classifier model' and may be interchangeably used herein. The second model may also be referred to as 'taxonomy classifier model' and may be interchangeably used herein. At step 212 of the method of the present disclosure, the one or more hardware processors 104 predict, by using the first model and the second model, (i) a first taxonomy level-based value, and (ii) the taxonomy code for each remaining item amongst the pre-processed dataset, respectively, to obtain a third set of items. Below Table 9 depicts the predictions made by the first model and the second model, namely (i) the first taxonomy level-based value, and (ii) the taxonomy code, respectively (e.g., refer to the L1 and TaxonomyCode fields).
Table 9
Row 1:
sellerId: 4; itemId: 345; L1: 412; TaxonomyCode: 30002820; brandName: milfina
additionalString: dairy käse & käseersatzprodukte käse gerieben & zerkleinert
featureString: emmentaler gerieben 250g
breadcrumbs: NONE
unitName: g; unitMeasure: 250; bio_flag: 0
cleanFeatureString: emmentaler gerieben
cleanAdditionalString: dairy käse käseersatzprodukte gerieben zerkleinert
cleanBcString: dairy käse käseersatzprodukte gerieben zerkleinert
cleanAsBcString: dairy käse käseersatzprodukte gerieben zerkleinert emmentaler
featureVector: [-0.002662169747054577, 0.05370628461241722, 0.04652866721153259, -0.07189878076314926, 0.040815941989421844]
Row 2:
sellerId: 41; itemId: 10547; L1: 412; TaxonomyCode: 9120; brandName: —
additionalString: https://www.inter.at/shop/lebensmittel/-lachs-filets-mit-zitronen-pfeffer-marinade-2-stueck-250g-packung/p/2020003331350
featureString: lachs-filets mit zitronen-pfeffer-marinade 2 stück 250g packung 250 g
breadcrumbs: startseite > produkte > tiefkühlung …2 stück 250g packung 250 g
unitName: g; unitMeasure: 250; bio_flag: 0
cleanFeatureString: lachs filets zitronen pfeffer marinade stück packung
cleanAdditionalString: lebensmittel lachs filets mit zitronen pfeffer marinade stueck packung
cleanBcString: tiefkühlungfleisch fisch meeresfrüchtefisch
cleanAsBcString: tiefkühlungfleisch fisch meeresfrüchtefisch lebensmittel lachs filets mit zitronen pfeffer marinade stueck packung
featureVector: [-0.07285787165164948, 0.0074628968723118305, -0.07725667953491211, 0.032168369740247726, -0.02816387452185154, 0.000162382100825198]
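By way of illustration only, a minimal sketch of steps 210 and 212 is given below, assuming scikit-learn classifiers over the sentence-encoder vectors; the disclosure does not fix the model family, so logistic regression, the vector dimensionality, and the synthetic labels are assumptions.

```python
# Illustrative sketch of steps 210-212 (assumed models, not the disclosed ones):
# train the "first model" (L1 classifier) and the "second model" (taxonomy
# classifier) on the code tagged items, then predict for the remaining items.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_tagged = rng.random((100, 512))                       # stand-in feature vectors
y_l1 = rng.choice([412, 469], size=100)                 # first taxonomy level (L1) values
y_tcode = rng.choice([1568, 2126, 30002820], size=100)  # leaf taxonomy codes

l1_model = LogisticRegression(max_iter=1000).fit(X_tagged, y_l1)           # first model
taxonomy_model = LogisticRegression(max_iter=1000).fit(X_tagged, y_tcode)  # second model

X_remaining = rng.random((10, 512))   # untagged items of the pre-processed dataset
predicted_l1 = l1_model.predict(X_remaining)
predicted_tcode = taxonomy_model.predict(X_remaining)   # yields the third set of items
```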
[063]
At step 214 of the method of the present disclosure, the one or more hardware processors 104 extract one or more features from the subset of items, and the third set of items. In feature extraction, one or more attributes associated with the subset of items, and the third set of items are concatenated to obtain a concatenated string (e.g., information from each column of Table 10 below is concatenated to obtain the concatenated string). Then, keywords from a customized dictionary stored in the database 108 are checked for their presence in the concatenated string. The matching keywords serve as the features that are extracted from the subset of items, and the third set of items. The customized dictionary comprises various keywords pertaining to items information and is built with the help of a domain expert or subject matter expert. The customized dictionary may be periodically updated with new keywords based on the incoming data or requests for providing item recommendation. A predefined attribute value for each taxonomy code of the subset of items, and the third set of items is then obtained. A comparison of keywords between the subset of items, and the third set of items is then performed. The one or more features are then extracted from the subset of items, and the third set of items based on the comparison and the predefined attribute value. Below Table 10 depicts item details for which feature extraction is performed.
Table 10
item_id: 1
product_name: Mayonnaise 50% Fett 0.5 L
GTCode: 1568
feature_summary: null
brand_name: —
brand_type: Manufacturer
country_of_origin: NULL
competitor_id: 41
breadcrumbs: Startseite > Produkte > Vorratsschrank > Grundnahrungsmittel > Ketchup & Mayonnaise > Mayonnaise 50% Fett 0,5 L
product_details: - mit Freiland-Eiern macht das Beste aus Ihren Gerichten! Entdecken Sie unsere Produkte - voller Geschmack und … Tube, im Beutel oder im Glas.
Ingredients: 49% Sonnenblumenöl, Trinkwasser, 4,6% EIGELB², Glukosesirup, Weingeistessig, Weißweinessig, modifizierte … allergene Inhaltsstoffe.
allergens: Eier und daraus gewonnene Erzeugnisse, Senf und daraus gewonnene Erzeugnisse
[064]
Table 11 depicts the various attributes (penultimate column) and attribute values (last column) for taxonomy code 1568, by way of examples:
Table 11
Taxonomycode | segmenttitle | brickcode | bricktitle | attributetitle | AttributeValue_split_Ge
1568 | Food/Beverage/Tobacco | 10006317 | Mayonnaise/Mayonnaise Substitutes (Frozen) | Level of Fat Claim | fett
1568 | Food/Beverage/Tobacco | 10006317 | Mayonnaise/Mayonnaise Substitutes (Frozen) | Level of Fat Claim | frei
1568 | Food/Beverage/Tobacco | 10006317 | Mayonnaise/Mayonnaise Substitutes (Frozen) | Type of Mayonnaise/Mayonnaise Substitute | creme
… | … | … | … | … | …
1568 | Food/Beverage/Tobacco | 10006319 | Mayonnaise/Mayonnaise Substitutes (Shelf Stable) | Type of Mayonnaise/Mayonnaise Substitute | salat
[065]
From the above Tables 10 and 11, a match of the common keywords between the items is obtained by combining all attributes of the item(s), and a corpus is obtained as depicted in Table 12 below.
Table 12
[ Mayonnaise 50% Fett 0.5 L,,Startseite > Produkte > Vorratsschrank > Grundnahrungsmittel > Ketchup & Mayonnaise > Mayonnaise 50% Fett 0,5 L, - mit Freiland-Eiern macht das Beste aus Ihren Gerichten! Entdecken Sie unsere Produkte - voller Geschmack und aus hochwertigen Zutaten gemacht: Mit nur 50% Fettgehalt ist Fein Mayonnaise die leichte Mayonnaise-Variante. Mit ihr lassen sich schmackhafte und sehr feine Salatkreationen zaubern. Unsere feine Mayonnaise gibt es in der praktischen Tube, im Beutel oder im Glas, 49% Sonnenblumenöl, Trinkwasser, 4,6% EIGELB², Glukosesirup, Weingeistessig, Weißweinessig, modifizierte Stärke, Zucker, Speisesalz, SENFSAAT, Gewürze, Zuckersirup, Konservierungsstoff: Kaliumsorbat, Säuerungsmittel: Citronensäure, Aromen, Farbstoff: Carotin. ²von Eiern aus Freilandhaltung. In
36
Großbuchstaben angegebene Zutaten enthalten allergene Inhaltsstoffe., Eier und daraus gewonnene Erzeugnisse, Senf und daraus gewonnene Erzeugnisse]
[066]
Using matching keywords from the above Table 12, features are extracted as shown in Table 13 below.
Table 13
{'Level of Fat Claim': ['fett', 'frei'], 'Type of Mayonnaise/Mayonnaise Substitute': ['mayonnaise', 'salat']}
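By way of illustration only, the keyword-based extraction of step 214 may be sketched as below; the dictionary mirrors the attribute titles and keyword values of Table 11, but its exact structure and the helper name are assumptions.

```python
# Illustrative sketch of step 214 (assumed structure, not the disclosed schema):
# keep the dictionary keywords that occur in the concatenated attribute corpus.
attribute_keywords = {
    "Level of Fat Claim": ["fett", "frei"],
    "Type of Mayonnaise/Mayonnaise Substitute": ["mayonnaise", "creme", "salat"],
}

def extract_features(item_attributes: list[str]) -> dict[str, list[str]]:
    corpus = " ".join(a for a in item_attributes if a).lower()
    return {title: [kw for kw in keywords if kw in corpus]
            for title, keywords in attribute_keywords.items()}

# An abbreviated Table 12-style corpus reproduces the features of Table 13:
features = extract_features([
    "Mayonnaise 50% Fett 0.5 L",
    "mit Freiland-Eiern ... feine Salatkreationen zaubern",
])
# {'Level of Fat Claim': ['fett', 'frei'],
#  'Type of Mayonnaise/Mayonnaise Substitute': ['mayonnaise', 'salat']}
```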
[067]
At step 216 of the method of the present disclosure, the one or more hardware processors 104 process the taxonomy code, an associated taxonomy level, and a value associated with the one or more features in the plurality of natural language processing (NLP) engines to obtain a first set of recommended items. Since the system 100 utilizes a series of NLP engines as depicted in FIG. 2, the outputs from the steps performed by each of these NLP engines (also referred to as artificial intelligence (AI) models) may be referred to as artificial intelligence (AI)-based output. In other words, the entire set of operations carried out by the NLP engines may be referred to as an AI pipeline that processes the taxonomy code, an associated taxonomy level, and a value associated with the one or more features and intelligently obtains the first set of recommended items. The outputs from each NLP engine, when put together, form the first set of recommended items. For instance, the first set of recommended items may comprise a first subset of recommended items by a first NLP engine, a second subset of recommended items by a second NLP engine, a third subset of recommended items by a third NLP engine, and a fourth subset of recommended items by a fourth NLP engine. The step of processing by the first NLP engine (e.g., say a similarity engine) amongst the plurality of NLP engines comprises filtering the second set of items for each item comprised in the first set of items based on the taxonomy code. A feature summary for the first set of items and the second set of items is then created based on the value of the one or more features. The feature summary is then converted into the feature vector of the first set of items and the second set of items. A cosine similarity score is computed for the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items. Then the first NLP engine recommends at least a few items (e.g., the first subset of recommended items) amongst the first set of recommended items based on the cosine similarity score. The above step of processing to obtain the first set of recommended items based on the cosine similarity score by the first NLP engine (e.g., NLP engine 1) is better understood by way of the following description. Table 14 depicts the retailer's items.
Table 14
retailer_item_id: 345
product_name: Emmentaler gerieben 250g
TaxonomyCode: 30002820
feature_summary: {'Consumer Lifestage': ['alle'], 'Formation': ['block', 'gerieben'], …, 'If in Sauce': ['NONE']}
country_of_origin: NULL
sg_categorydescription: Dairy
sg_commoditygroupdescription: Käse & Käseersatzprodukte
sg_subcommoditygroupdescription: Käse gerieben & zerkleinert
packaging_description: Schlauchbeutelfolie OPA/PE/15/50
ingredientstatement: MILCH, Stärke, Salz, Milchsäurebakterienkulturen, Propionsäurebakterien, Lab
allergenstatement: NULL
[068]
Similarly, Table 15 depicts the second set of items (competitor’s items).
Table 15
Record 1:
competitor_item_id: 7903
product_name: Schärdinger Bergkäse gerieben 200 G
Taxonomycode: 30002820
feature_summary: {'Consumer Lifestage': ['alle'], 'Formation': ['gerieben'], … 'If in Sauce': ['NONE']}
country_of_origin: Österreich
breadcrumbs: Startseite > Produkte > Kühlregal > Käse > Käse gerieben > Schärdinger Bergkäse gerieben 200 G
product_description: Der würzig kräftige Bergkäse gerieben eignet sich ideal zum überbacken von herzhafte Speisen.
pdp_url: https://www.inter.at/shop/lebensmittel/schaerdinger-bergkaese-gerieben/p/2020002527945
allergens: Milch und daraus gewonnene Erzeugnisse (inkl. Laktose)
ingredients: MILCH, Maisstärke, Salz, Lab, Käsereikulturen. In Großbuchstaben angegebene Zutaten enthalten allergene Inhaltsstoffe.
Record 2:
competitor_item_id: 285177
product_name: bedda Granvegano zum Reiben 150 g Packung
Taxonomycode: 30002820
feature_summary: {'Consumer Lifestage': ['alter'], … Sauce': ['NONE']}
country_of_origin: Deutschland
breadcrumbs: Alle Kategorien > Kühlwaren > Käse, Aufstriche & Salate > Parmesan & Reibkäse
product_description: bedda Granvegano ist eine leckere vegane … Würzen von frischen Salaten.
pdp_url: https://shop.billa.at/produkte/bedda-granvegano-zum-reiben-00596299
allergens: NULL
ingredients: modifizierte Stärke, Wasser, Kokosöl (24 %), Meersalz, Säureregulator: Calciumcitrat, Aroma, Olivenextrakt, Reisprotein.
Record 3:
competitor_item_id: 27557
product_name: Schärdinger Emmentaler Gerieben 200 g Packung
Taxonomycode: 30002820
feature_summary: {'Consumer Lifestage': ['NONE'], … Sauce': ['NONE']}
country_of_origin: Österreich
breadcrumbs: Alle Kategorien > Kühlwaren > Käse, Aufstriche & Salate > Parmesan & Reibkäse
product_description: Es muss mal wieder schnell gehen? … aber auch perfekt für Pasta.
pdp_url: https://shop.billa.at/produkte/schaerdinger-emmentaler-gerieben-00421626
allergens: Enthält - Ist im Produkt enthalten Milch und Milcherzeugnisse
ingredients: Zutaten: MILCH, Maisstärke, Salz, Lab, Käsereikulturen
…
Record 4:
competitor_item_id: 601826
product_name: Perfect Italiano Grated Cheese Perfect Bakes | 250g
Taxonomycode: 30002820
feature_summary: {'Consumer Lifestage': ['NONE'], … 'If in Sauce': ['NONE']}
country_of_origin: NULL
breadcrumbs: Dairy, Eggs & Fridge > Cheese > Grated Cheese
product_description: Perfect Italiano Cheese Perfect Bakes Grated 250g. 3 Cheeses for a crisp, golden crust
pdp_url: https://www.coles.com.au/product/perfect-italiano-grated-cheese-perfect-bakes-250g-3274024
allergens: Contains Milk
ingredients: Cheese (Milk, Salt, Cultures, Enzyme), Anticaking Agent (460), Preservative (200).
[069]
Using Tables 14 and 15, top ‘x’ items are recommended by the first engine as depicted in Table 16 below by way of examples:
Table 16
| retailer_item_id | competitor_item_id | retailer_item_name | competitor_item_name | aiscore | retailer_tcode | competitor_tcode |
|---|---|---|---|---|---|---|
| 345 | 10574 | Emmentaler gerieben 250g | Emmentaler gerieben 250 G | 0.875331 | 30002820 | 30002820 |
| 345 | 10019 | Emmentaler gerieben 250g | Pizza-Käse gerieben. leicht* 250 G | 0.86802 | 30002820 | 30002820 |
| 345 | 7952 | Emmentaler gerieben 250g | Schärdinger Gratinkäse 200 G | 0.860829 | 30002820 | 30002820 |
| 345 | 8885 | Emmentaler gerieben 250g | Cheddar rot gerieben 200 G | 0.860282 | 30002820 | 30002820 |
[070]
The cosine similarity score as computed by the first NLP engine is shown in the 'aiscore' column of the above Table 16 for each item pair. In the present disclosure, the cosine similarity score is computed by way of the following description. Given two n-dimensional vectors of attributes, A and B, the cosine similarity, cos(θ), is represented using a dot product and magnitude as:

$$\cos(\theta) = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^{2}}\;\sqrt{\sum_{i=1}^{n} B_i^{2}}}$$

where $A_i$ and $B_i$ are the i-th components of vectors A and B, respectively, and the cosine matching score (CMS) lies in the range [0, 1].
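By way of a non-limiting sketch, the above computation may be expressed in Python as follows; the feature vectors shown are hypothetical stand-ins for the sentence-encoder outputs depicted in the tables herein, and clipping negative similarities to zero is an assumption made here so that the score stays within the stated [0, 1] range:

```python
import numpy as np

def cosine_matching_score(a: np.ndarray, b: np.ndarray) -> float:
    """CMS = (sum of A_i * B_i) / (||A|| * ||B||), clipped to [0, 1]."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # an all-zero vector carries no feature information
    return max(0.0, float(np.dot(a, b) / denom))

# Hypothetical encoder vectors for a retailer item and a competitor item.
retailer_vec = np.array([-0.0027, 0.0121, 0.0408])
competitor_vec = np.array([-0.0031, 0.0117, 0.0395])
print(round(cosine_matching_score(retailer_vec, competitor_vec), 6))
```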
[071]
Similarly, the step of processing by the second NLP engine (e.g., a taxonomy traversal engine) amongst the plurality of NLP engines is performed. More specifically, for each taxonomy code, the system 100 traverses through the associated taxonomy level for determining a match between an item of the first set of items and an item of the second set of items to obtain a set of level-based items. Further, one or more attributes of the set of level-based items are concatenated to obtain a set of concatenated attributes. The set of concatenated attributes is converted into the feature vector of the first set of items and the second set of items. Further, a cosine distance score between the first set of items and the second set of items is computed based on the feature vector of the first set of items and the second set of items. A taxonomy based matching score is then computed based on the cosine distance score to obtain at least a few items for recommendation (e.g., the second subset of recommended items). In other words, the first set of recommended items is obtained based on the taxonomy based matching score. The above step of processing to obtain the first set of recommended items based on the taxonomy based matching score by the second NLP engine (e.g., NLP engine 2) is better understood by way of the following description. Table 17 depicts the retailer's items by way of example:
Table 17
| Field | Value |
|---|---|
| retailer_item_id | 345 |
| product_name | Emmentaler gerieben 250g |
| TaxonomyCode | 30002820 |
| feature_summary | {'Consumer Lifestage': ['alle'], … ['NONE']} |
| country_of_origin | NULL |
| sg_categorydescription | Dairy |
| sg_commoditygroupdescription | Käse & Käseersatzprodukte |
| sg_subcommoditygroupdescription | Käse gerieben & zerkleinert |
| packaging_description | Schlauchbeutelfolie OPA/PE/15/50 |
| ingredientstatement | MILCH, Stärke, Salz, Milchsäurebakterienkulturen, Propionsäurebakterien, Lab |
| allergenstatement | NULL |
[072]
Similarly, Table 18 depicts the second set of items (competitor’s items).
Table 18
| Field | Item 1 | Item 2 | Item 3 | … | Item n |
|---|---|---|---|---|---|
| competitor_item_id | 350694 | 351426 | 350883 | … | 350693 |
| product_name | Mccain Superfries Straight Cut | Dairyworks Grated 3 Cheese Mix | Perfect Italiano Grated Mozzarella | … | Mccain Superfries Steak Cut |
| Taxonomycode | 9043 | 30002820 | 30002820 | … | 9043 |
| feature_summary | {'If Extruded': … Snack': ['chips']} | {'Consumer Lifestage': … ['NONE']} | {'Consumer Lifestage': … ['NONE']} | … | {'If Extruded': … Snack': ['chips']} |
| country_of_origin | NULL | NULL | NULL | … | NULL |
| breadcrumbs | Specials > Low Price | Specials > Prices Dropped | Dairy, Eggs & Fridge > Cheese | … | Specials > Low Price |
| product_description | NULL | NULL | NULL | … | NULL |
| pdp_url | https://www.woolworths.com.au/shop/productdetails/662933/mccain-superfries-straight-cut | https://www.woolworths.com.au/shop/productdetails/681853/dairyworks-grated-3-cheese-mix | https://www.woolworths.com.au/shop/productdetails/66922/perfect-italiano-grated-mozzarella | … | https://www.woolworths.com.au/shop/productdetails/662932/mccain-superfries-steak-cut |
| allergens | NULL | NULL | NULL | … | NULL |
| ingredients | NULL | NULL | NULL | … | NULL |
[073]
Using Tables 17 and 18, taxonomy level-based items are obtained, as depicted in Table 19 by way of example:
Table 19
| Field | Value |
|---|---|
| category | Food, Beverages & Tobacco |
| subcategory | Food Items |
| L1 | 412 |
| L2 | 422 |
| L3 | 428 |
| L4 | 429 |
| L5 | 30002820 |
| L6 | 0 |
| L7 | 0 |
| TaxonomyList | [412, 422, 428, 429, 30002820, 0, 0] |
| Tcode (taxonomy code) | 30002820 |
| TaxonomyName | GRATED |
[074]
Attributes from the above Table 19 are then concatenated and converted into a feature vector for the first entity and the second entity, as depicted in Tables 20 and 21, respectively. More specifically, Table 20 depicts the feature vector of the first set of items pertaining to the first entity, and Table 21 depicts the feature vector of the second set of items pertaining to the second entity.
Table 20

| Field | Value |
|---|---|
| sellerId | 4 |
| itemId | 345 |
| L1 | 412 |
| TaxonomyCode | 30002820 |
| brandName | milfina |
| additionalString | dairy käse & käseersatzprodukte käse … & zerkleinert |
| featureString | emmentaler gerieben 250g |
| breadcrumbs | NONE |
| unitName | g |
| unitMeasure | 250 |
| bio_flag | 0 |
| cleanFeatureString | emmentaler gerieben |
| cleanAdditionalString | dairy käse käseersatzprodukte gerieben zerkleinert |
| cleanBcString | dairy käse käseersatzprodukte gerieben zerkleinert |
| cleanAsBcString | dairy käse käseersatzprodukte … emmentaler |
| featureVector | [-0.002662169747054577, … 0.040815941989421844] |
Table 21

| Field | Value |
|---|---|
| sellerId | 41 |
| itemId | 10547 |
| L1 | 412 |
| TaxonomyCode | 9120 |
| brandName | |
| additionalString | https://www.inter.at/shop/lebensmittel/-lachs-filets-mit-zitronen-pfeffer-marinade-2-stueck-250g-packung/p/2020003331350 |
| featureString | lachs-filets mit zitronen-pfeffer-… 250 g |
| breadcrumbs | startseite > produkte > tiefkühlung > fleisch, … |
| unitName | g |
| unitMeasure | 250 |
| bio_flag | 0 |
| cleanFeatureString | lachs filets zitronen pfeffer marinade stück packung |
| cleanAdditionalString | lebensmittel lachs filets mit zitronen pfeffer marinade stueck packung |
| cleanBcString | tiefkühlungfleisch fisch meeresfrüchtefisch |
| cleanAsBcString | tiefkühlungfleisch fisch meeresfrüchtefisch … stueck packung |
| featureVector | [-0.07285787165164948, …, 0.02816387452185154, 0.000162382100825198] |
[075]
Using the feature vectors from Table 20 and Table 21, a cosine distance score between the first set of items and the second set of items is computed, based on which a taxonomy based matching score is computed. The taxonomy based matching score is computed by way of the following description. The item1 and item2 are represented as vectors A and B, respectively. The matching score TMS is derived as follows:

$$TMS = 1 - \frac{A \cdot B}{\lVert A \rVert_2 \, \lVert B \rVert_2}$$
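A minimal sketch of this computation, assuming plain NumPy vectors, is given below; note that the formula as stated is a cosine distance, so smaller values indicate closer items:

```python
import numpy as np

def taxonomy_matching_score(a: np.ndarray, b: np.ndarray) -> float:
    """TMS = 1 - (A . B) / (||A||_2 * ||B||_2), i.e., the cosine distance
    between the concatenated-attribute feature vectors of two items."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 1.0  # treat an empty feature vector as maximally distant
    return float(1.0 - np.dot(a, b) / denom)

# Hypothetical feature vectors for a retailer item and a competitor item.
item1 = np.array([0.12, -0.03, 0.44])
item2 = np.array([0.10, -0.02, 0.41])
print(round(taxonomy_matching_score(item1, item2), 6))  # near 0 => near match
```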
[076]
Then, at least a few items amongst the first set of recommended items are obtained based on the taxonomy based matching score. The recommended items with the taxonomy based matching score (e.g., refer to the 3rd column in Table 22 below) are depicted in Table 22 below:
Table 22
| retailer_item_name | competitor_item_name | Aiscore (taxonomy based matching score) | retailer_tcode | competitor_tcode |
|---|---|---|---|---|
| Emmentaler gerieben 250g | Emmentaler gerieben 250 G | 0.875331 | 30002820 | 30002820 |
| Emmentaler gerieben 250g | Fallini Käse gerieben 40 G | 0.748203 | 30002820 | 30002820 |
| Emmentaler gerieben 250g | Bergbauern-Emmentaler in Scheiben 200 G | 0.737946 | 30002820 | 30002837 |
| Emmentaler gerieben 250g | bedda veganer Käseersatz | 0.710421 | 30002820 | 9113 |
[077]
Similarly, the step of processing by the third NLP engine (e.g., a semantic engine) amongst the plurality of NLP engines is performed. More specifically, an index of the second set of items is created. A semantic match for a query item associated with the first set of items is identified in the index of the second set of items. A semantic matching score is then computed based on the semantic match. In other words, a semantic match for the given retailer item name is searched in the index of the competitor items, and the Euclidean distance is computed between the query item and the item in the index, which forms the semantic matching score.
[078]
The above step of computing the semantic matching score and obtaining at least a few items (e.g., the third subset of recommended items) amongst the first set of recommended items based on the semantic matching score is better understood by way of the following description. Table 23 depicts the first set of items pertaining to the first entity (e.g., the retailer items).
Table 23

| Field | Value |
|---|---|
| retailer_item_id | 345 |
| product_name | Emmentaler gerieben 250g |
| TaxonomyCode | 30002820 |
| feature_summary | {'Consumer Lifestage': ['alle'], .. ['NONE']} |
| country_of_origin | NULL |
| sg_categorydescription | Dairy |
| sg_commoditygroupdescription | Käse & Käseersatzprodukte |
| sg_subcommoditygroupdescription | Käse gerieben & zerkleinert |
| packaging_description | Schlauchbeutelfolie OPA/PE/15/50 |
| ingredientstatement | MILCH, Stärke, Salz, Milchsäurebakterienkulturen, Propionsäurebakterien, Lab |
| allergenstatement | NULL |
[079]
For the sake of brevity, the table for the second set of items pertaining to the second entity (e.g., the competitor items) is not shown.
[080]
However, using both Table 23 and the competitor items (not shown), the semantic matching score is computed. First, a Faiss IndexFlatL2 index is built over the set of items which need to be searched (i.e., the competitor items), and the retailer item searched against this index is referred to as the query item. The Euclidean distance (in Euclidean space, where the vectors of item1 and item2 are denoted by $q_i$ and $p_i$) between the query item and an item in the index, which forms the score, is derived as follows:

$$\sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}$$
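A minimal sketch of this step using the Faiss library is given below; the vector dimensionality and the random vectors are placeholder assumptions standing in for the actual sentence-encoder outputs. Note that IndexFlatL2 returns squared L2 distances, so a square root recovers the Euclidean distance used above:

```python
import numpy as np
import faiss  # Facebook AI Similarity Search library

d = 512  # assumed dimensionality of the sentence-encoder vectors
competitor_vecs = np.random.rand(1000, d).astype("float32")  # stand-in index data
query_vec = np.random.rand(1, d).astype("float32")           # retailer query item

index = faiss.IndexFlatL2(d)  # exact L2 index, as named in the disclosure
index.add(competitor_vecs)    # build the index over the competitor items

k = 5  # number of nearest competitor items to retrieve
squared_dists, ids = index.search(query_vec, k)
euclidean_dists = np.sqrt(squared_dists)  # Euclidean distance per match
print(ids[0], euclidean_dists[0])
```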
[081]
The semantic matching based score lies in the range [0, 1]. The recommended items obtained based on the semantic matching score (e.g., refer to the 5th column) are depicted in Table 24 below:
Table 24

| retailer_item_id | competitor_item_id | retailer_item_name | competitor_item_name | Aiscore (semantic matching score) | retailer_tcode | competitor_tcode |
|---|---|---|---|---|---|---|
| 345 | 3872 | Emmentaler gerieben 250g | SalzburgMilch Emmentaler Scheiben 1 KG | 0.7967 | 30002820 | 30001237 |
| 345 | 5011 | Emmentaler gerieben 250g | Emmentaler in Scheiben 1 KG | 0.7967 | 30002820 | 30001237 |
| 345 | 9193 | Emmentaler gerieben 250g | Philadelphia mit Milka 175 G | 0.755447 | 30002820 | 5785 |
| 345 | 8404 | Emmentaler gerieben 250g | NÖM NÖM Cottage Cheese natur 200 G | 0.754907 | 30002820 | 5785 |
| 345 | 10574 | Emmentaler gerieben 250g | Emmentaler gerieben 250 G | 0.875331 | 30002820 | 30002820 |
[082]
Similarly, the step of processing by the fourth NLP engine (e.g., a string match engine) amongst the plurality of NLP engines is performed. More specifically, a comparison of a name associated with each item amongst the first set of items with each item amongst the second set of items is performed, and a string matching score is computed based on the comparison.
[083]
For the sake of brevity, the items of the first entity and the second entity are not shown again. However, in the present disclosure, the system 100 considered Table 23, which consists of the first set of items of the first entity (e.g., the retailer's items), for the string matching score computation. Similarly, the second set of items of the second entity (e.g., the competitor's items) are not shown but can be realized in practice. Item names of a given retailer item are matched with competitor item names, wherein CC = length of the longest common character set among the two item strings. This can occur for many substrings, say n, wherein the higher the string matching score (SMS) value, the greater the likelihood of similarity between the items. For instance, given two item strings, item Retailer (item1) and item Competitor (item2), the matching score that denotes the extent of similarity is derived using the following formula:

$$SMS = \frac{2\sum_{i=1}^{n} CC_i}{|item1| + |item2|}$$

where SMS ∈ [0, 1].
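One reading of this formula coincides with the Ratcliff/Obershelp ratio implemented by Python's standard difflib module, in which the matching blocks play the role of the common character sets CC. A sketch under that assumption is given below (lower-casing the names is an added normalization assumption, not stated above):

```python
from difflib import SequenceMatcher

def string_matching_score(item1: str, item2: str) -> float:
    """SMS = 2 * sum(CC) / (|item1| + |item2|), in [0, 1].
    SequenceMatcher.ratio() computes twice the total length of the
    matching blocks divided by the combined length of both strings."""
    return SequenceMatcher(None, item1.lower(), item2.lower()).ratio()

print(string_matching_score("Emmentaler gerieben 250g",
                            "Emmentaler gerieben 250 G"))
```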
[084]
Based on the above score computation, at least a few items (e.g., the fourth subset of recommended items) amongst the first set of recommended items are obtained based on the string matching score. The recommended items with the string matching score (e.g., refer to the 5th column in Table 25 below) are depicted in Table 25 below:
Table 25
| retailer_item_id | competitor_item_id | retailer_item_name | competitor_item_name | Aiscore (string matching score) | retailer_tcode | competitor_tcode |
|---|---|---|---|---|---|---|
| 345 | 10574 | Emmentaler gerieben 250g | Emmentaler gerieben 250 G | 0.875331 | 30002820 | 30002820 |
| 345 | 240055 | Emmentaler gerieben 250g | Emmentaler in Scheiben 400 G | 0.842105 | 30002820 | 30001237 |
| 345 | 3872 | Emmentaler gerieben 250g | SalzburgMilch Emmentaler Scheiben 1 KG | 0.7967 | 30002820 | 30001237 |
| 345 | 5011 | Emmentaler gerieben 250g | Emmentaler in Scheiben 1 KG | 0.7967 | 30002820 | 30001237 |
| 345 | 7608 | Emmentaler gerieben 250g | Schärdinger Emmentaler geräuchert in Scheiben 150g 150 G | 0.734694 | 30002820 | 30001237 |
[085]
Once the first set of recommended items is obtained as shown above, one or more rules are applied on the first set of recommended items to obtain a fourth set of items at step 218. Each rule is associated with at least one NLP engine amongst the plurality of NLP engines. Table 26 below depicts illustrative rules applied on the first set of recommended items to obtain the fourth set of items (a sketch of applying such rules follows the table).
Table 26
| Rule | Rule description |
|---|---|
| 1 | Country of origin should match |
| 2 | Ingredients such as fat content, sugar, and salt should match |
| 3 | Allergen value - gluten free should match |
| 4 | Organic product should be compared with only organic products |
| 5 | Manufacturer brand should be compared with manufacturer brand, e.g., Coca Cola with Coca Cola |
| 6 | Grammage content should be comparable within a (+/-) 50% range, e.g., a 500 gm product can be compared with max. 750 gm or min. 250 gm for a potential match |
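By way of a hedged sketch, such rules may be applied as simple predicates over item attributes; the field names and values below are hypothetical and mirror only Rules 1, 4, and 6 of Table 26:

```python
# Hypothetical item records; the field names are illustrative assumptions.
retailer = {"country_of_origin": "AT", "is_organic": False, "grammage": 500}
competitor = {"country_of_origin": "AT", "is_organic": False, "grammage": 700}

def passes_rules(r: dict, c: dict) -> bool:
    """Return True only if the candidate match satisfies the rules."""
    if r["country_of_origin"] != c["country_of_origin"]:  # Rule 1
        return False
    if r["is_organic"] != c["is_organic"]:                # Rule 4
        return False
    # Rule 6: grammage must fall within a +/-50% range of the retailer item,
    # e.g., a 500 gm product is comparable with 250 gm to 750 gm products.
    return 0.5 * r["grammage"] <= c["grammage"] <= 1.5 * r["grammage"]

print(passes_rules(retailer, competitor))  # True: 700 gm lies within 250-750 gm
```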
[086]
It is to be understood by a person having ordinary skill in the art, or a person skilled in the art, that the above rules are representative, and such rules shall not be construed as limiting the scope of the present disclosure. Further, at step 220 of the method of the present disclosure, the one or more hardware processors 104 group one or more items from the fourth set of items into one or more categories, and at least a subset of items amongst the fourth set of items is recommended to obtain a second set of recommended items at step 222. The second set of recommended items is based on a weightage associated with each of the plurality of NLP engines, in one embodiment of the present disclosure.
[087]
Table 27 depicts the second set of recommended items that are categorized into various categories by the NLP engines.
Table 27
| retailer_item_id | competitor_item_id | retailer_item_name | competitor_item_name | retailer_tcode | competitor_tcode | bucket |
|---|---|---|---|---|---|---|
| 345 | 10574 | Emmentaler gerieben 250g | Emmentaler gerieben 250 G | 30002820 | 30002820 | 1 |
| 345 | 5011 | Emmentaler gerieben 250g | Emmentaler in Scheiben 1 KG | 30002820 | 30001237 | 3 |
| 345 | 10019 | Emmentaler gerieben 250g | Pizza-Käse gerieben. leicht* 250 G | 30002820 | 30002820 | 4 |
| 345 | 7952 | Emmentaler gerieben 250g | Schärdinger Gratinkäse 200 G | 30002820 | 30002820 | 4 |
| 345 | 8885 | Emmentaler gerieben 250g | Cheddar rot gerieben 200 G | 30002820 | 30002820 | 4 |
| 345 | 240055 | Emmentaler gerieben 250g | Emmentaler in Scheiben 400 G | 30002820 | 30001237 | 4 |
| 345 | 7953 | Emmentaler gerieben 250g | Schärdinger Spätzlekäse 200 G | 30002820 | 30002820 | 4 |
| 345 | 3872 | Emmentaler gerieben 250g | SalzburgMilch Emmentaler Scheiben 1 KG | 30002820 | 30001237 | 3 |
[088]
In the above Table 27, the bucketing refers to the grouping of items into various categories. For instance, items are grouped into a first category based on an item comprised in the first set of recommended items that is recommended by a first combination of NLP engines. In other words, matched items which are recommended by all engines (e.g., the first engine, second engine, third engine, and fourth engine) are put into bucket 1. Similarly, items are grouped into a second category based on an item comprised in the first set of recommended items that is recommended by a second combination of NLP engines. In other words, matched items which are recommended by any 3 NLP engines (e.g., (i) the first, second, and fourth engines, or (ii) the first, second, and third engines, or (iii) the second, third, and fourth engines, or (iv) the first, third, and fourth engines) are put into bucket 2. Further, items are grouped into a third category based on an item comprised in the first set of recommended items that is recommended by a third combination of NLP engines. In other words, matched items which are recommended by any 2 engines (e.g., (i) the first and second engines, or (ii) the first and third engines, or (iii) the first and fourth engines, or (iv) the second and third engines, or (v) the second and fourth engines, or (vi) the third and fourth engines) are put into bucket 3. Furthermore, items are grouped into a fourth category based on an item comprised in the first set of recommended items that is recommended by a single NLP engine. In other words, matched items which are recommended by only one engine (e.g., only the first engine, or only the second engine, or only the third engine, or only the fourth engine) are put into bucket 4. Bucket 4 contains the non-overlapping matches and thus contains the highest number of recommendations. To limit the recommendations to a count of 3, the system 100 considers bucket 4 by giving weightage to the String, Taxonomy, Semantic, and Similarity engines in that sequence. In other words, the weightage associated with each of the plurality of NLP engines is determined based on a match of an item comprised in the fourth set of items with an associated item amongst the second set of items, and accordingly the second set of recommended items is obtained in a specific order. Further, the weightage of each of the plurality of NLP engines is updated based on a comparison of (i) one or more items amongst the second set of recommended items, and (ii) a fifth set of items. For instance, the second set of recommended items is validated by a domain expert or subject matter expert. In other words, updated weights are obtained based on a small set of item matches after comparing them with human-validated matches. The second set of recommended items is then sorted based on the updated weightage. In other words, the matches are sorted in a specific order (e.g., in descending order) based on the updated weights. The higher the weightage for an NLP engine, the higher the priority given to that match.
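A minimal sketch of this bucketing and weight-based ordering is given below; the engine weights and match sets are illustrative assumptions rather than values prescribed by the present disclosure:

```python
# Priority for bucket 4, per the sequence stated above:
# String > Taxonomy > Semantic > Similarity.
PRIORITY = {"string": 4, "taxonomy": 3, "semantic": 2, "similarity": 1}

# Hypothetical matches: (retailer_id, competitor_id) -> engines recommending it.
matches = {
    (345, 10574): {"string", "taxonomy", "semantic", "similarity"},
    (345, 5011): {"string", "semantic"},
    (345, 10019): {"similarity"},
    (345, 240055): {"string"},
}

def bucket(engines: set) -> int:
    """Bucket 1: all four engines agree; 2: any three; 3: any two; 4: one."""
    return 5 - len(engines)

for pair, engines in matches.items():
    print(pair, "-> bucket", bucket(engines))

# Order bucket-4 matches by the weightage of their single recommending engine.
bucket4 = [(pair, engines) for pair, engines in matches.items()
           if bucket(engines) == 4]
bucket4.sort(key=lambda pe: PRIORITY[next(iter(pe[1]))], reverse=True)
print([pair for pair, _ in bucket4][:3])  # keep at most 3 recommendations
```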
[089]
FIG. 5, with reference to FIGS. 1 through 4, depicts a graphical representation of recalled percentage against labelled matches of items, in accordance with an embodiment of the present disclosure. As depicted in FIG. 5, with the method of the present disclosure, the accuracy has increased by an average of 4.5% across all categories as compared to the conventional approach.
[090]
FIG. 6, with reference to FIGS. 1 through 5, depicts a graphical representation illustrating the time taken for item matching (throughput) by the method of the present disclosure and conventional approach(es), in accordance with an embodiment of the present disclosure. With the conventional approach, it used to take about 559.68 hours to complete the process (represented by the bar graph), whereas with the method of the present disclosure the pipeline takes 3.5 hours to process and give recommendations for all categories (represented by the line graph), thereby substantially reducing the time taken. Table 28 below depicts various values illustrated in the graphical representation of FIG. 6.
Table 28
| Category name | Item Count | Time taken by conventional approaches (in hrs) | Time taken by method of the present disclosure (in hrs) |
|---|---|---|---|
| Alcoholic Beverages | 762 | 63.50 | 3.50 |
| Bakery | 650 | 54.17 | |
| Dairy | 783 | 65.25 | |
| Chilled conveniences | 500 | 41.67 | |
| Fresh meat and Fish | 886 | 73.83 | |
| Freezer | 492 | 41.00 | |
| Pantry | 875 | 72.92 | |
| Non-Alcohol beverages | 845 | 70.42 | |
| Snacking | 923 | 76.92 | |
| Total time taken | 6716 | 559.68 | 3.50 |
[091]
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[092]
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[093]
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[094]
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
[095]
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[096]
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A processor implemented method, comprising:
receiving, via one or more hardware processors, information comprising a first set of items pertaining to a first entity, and a second set of items pertaining to a second entity (202);
pre-processing, via the one or more hardware processors, the information comprising the first set of items pertaining to the first entity and the second set of items pertaining to the second entity to obtain a pre-processed dataset (204);
obtaining, via the one or more hardware processors, a taxonomy code to at least a subset of items amongst the pre-processed dataset to obtain a set of code tagged items (206), wherein each code tagged item amongst the set of code tagged items is associated with one or more attributes;
converting, by using a sentence encoder via the one or more hardware processors, the one or more attributes comprised in the set of code tagged items into a feature vector (208), wherein the feature vector is associated with the first set of items and the second set of items;
building, via the one or more hardware processors, a first model and a second model using the set of code tagged items and the feature vector (210);
predicting, by using the first model and the second model via the one or more hardware processors, (i) a first taxonomy level-based value, and (ii) the taxonomy code for each remaining item amongst the pre-processed dataset, respectively to obtain a third set of items (212);
extracting, via the one or more hardware processors, one or more features from the subset of items, and the third set of items (214);
processing, via the one or more hardware processors, the taxonomy code, an associated taxonomy level, and a value associated with the one or more features in a plurality of natural language processing (NLP) engines to obtain a first set of recommended items (216);
applying, via the one or more hardware processors, one or more rules on the first set of recommended items to obtain a fourth set of items, wherein each rule is
associated with at least one NLP engine amongst the plurality of NLP engines (218);
grouping, via the one or more hardware processors, one or more items from the fourth set of items into one or more categories (220); and
recommending, via the one or more hardware processors, at least a subset of items amongst the fourth set of items to obtain a second set of recommended items (222), wherein the second set of recommended items is based on a weightage associated to each of the plurality of NLP engines.
2. The processor implemented method as claimed in claim 1, wherein the step of obtaining the taxonomy code is based on at least one of an associated item category and an associated item sub-category.
3. The processor implemented method as claimed in claim 1, wherein the step of extracting the one or more features from the subset of items, and the third set of items comprises:
concatenating one or more attributes associated with the subset of items, and the third set of items;
obtaining a predefined attribute value for each taxonomy code of the subset of items, and the third set of items;
performing a comparison of keywords between the subset of items, and the third set of items; and
extracting the one or more features from the subset of items, and the third set of items based on the comparison and the predefined attribute value.
4. The processor implemented method as claimed in claim 1, wherein the
step of processing by a first NLP engine amongst the plurality of NLP engines
comprises:
filtering the second set of items for each item comprised in the first set of items based on the taxonomy code;
creating a feature summary for the first set of items and the second set of items based on the value of the one or more features;
converting the feature summary into the feature vector of the first set of items and the second set of items;
computing a cosine similarity score for the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; and
obtaining the first set of recommended items based on the cosine similarity score.
5. The processor implemented method as claimed in claim 1, wherein the
step of processing by a second NLP engine amongst the plurality of NLP engines comprises:
for each taxonomy code:
traversing through the associated taxonomy level for determining a match between an item of the first set of items and an item of the second set of items to obtain a set of level-based items;
concatenating one or more attributes of the set of level-based items to obtain a set of concatenated attributes;
converting the set of concatenated attributes into the feature vector of the first set of items and the second set of items;
computing a cosine distance score between the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items;
computing a taxonomy based matching score based on the cosine distance score; and
obtaining the first set of recommended items based on the taxonomy based matching score.
6. The processor implemented method as claimed in claim 1, wherein the
step of processing by a third NLP engine amongst the plurality of NLP engines
comprises:
creating an index of the second set of items;
identifying a semantic match for a query item associated with the first set of items in the index of the second set of items;
computing a semantic matching score based on the semantic match; and
obtaining the first set of recommended items based on the semantic matching score.
7. The processor implemented method as claimed in claim 1, wherein the
step of processing by a fourth NLP engine amongst the plurality of NLP engines
comprises:
performing a comparison of a name associated with each item amongst the first set of items with each item amongst the second set of items;
computing a string matching score based on the comparison; and
obtaining the first set of recommended items based on the string matching score.
8. The processor implemented method as claimed in claim 1, wherein the
step of grouping, comprises:
grouping one or more items into a first category based on an item comprised in the first set of recommended items that is recommended by a first combination of NLP engines;
grouping one or more items into a second category based on an item comprised in the first set of recommended items that is recommended by a second combination of NLP engines;
grouping one or more items into a third category based on an item comprised in the first set of recommended items that is recommended by a third combination of NLP engines; and
grouping one or more items into a fourth category based on an item comprised in the first set of recommended items that is recommended by an NLP engine.
9. The processor implemented method as claimed in claim 1, wherein the
weightage associated to each of the plurality of NLP engines is determined based
on a match of an item comprised in the fourth set of items with an associated item
amongst the second set of items.
10. The processor implemented method as claimed in claim 1, comprising:
updating the weightage of each of the plurality of NLP engines based on a
comparison of (i) one or more items amongst the second set of recommended items, and (ii) a fifth set of items; and
sorting the second set of recommended items based on the updated weightage.
11. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive information comprising a first set of items pertaining to a first entity, and a second set of items pertaining to a second entity;
pre-process the information comprising the first set of items pertaining to the first entity and the second set of items pertaining to the second entity to obtain a pre-processed dataset;
obtain a taxonomy code to at least a subset of items amongst the pre-processed dataset to obtain a set of code tagged items, wherein each code tagged item amongst the set of code tagged items is associated with one or more attributes;
convert, by using a sentence encoder, the one or more attributes comprised in the set of code tagged items into a feature vector, wherein the feature vector is associated with the first set of items and the second set of items;
build a first model and a second model using the set of code tagged items and the feature vector;
predict, by using the first model and the second model, (i) a first taxonomy level-based value, and (ii) the taxonomy code for each remaining item amongst the pre-processed dataset, respectively to obtain a third set of items;
extract one or more features from the subset of items, and the third set of items;
process the taxonomy code, an associated taxonomy level, and a value associated with the one or more features in a plurality of natural language processing (NLP) engines to obtain a first set of recommended items;
apply one or more rules on the first set of recommended items to obtain a fourth set of items, wherein each rule is associated with at least one NLP engine amongst the plurality of NLP engines;
group one or more items from the fourth set of items into one or more categories; and
recommend at least a subset of items amongst the fourth set of items to obtain a second set of recommended items, wherein the second set of recommended items is based on a weightage associated to each of the plurality of NLP engines.
12. The system as claimed in claim 11, wherein the taxonomy code is based on at least one of an associated item category and an associated item sub-category.
13. The system as claimed in claim 11, wherein the one or more features are extracted from the subset of items, and the third set of items by
concatenating one or more attributes associated with the subset of items, and the third set of items;
obtaining a predefined attribute value for each taxonomy code of the subset of items, and the third set of items;
performing a comparison of keywords between the subset of items, and the third set of items; and
extracting the one or more features from the subset of items, and the third set of items based on the comparison and the predefined attribute value.
14. The system as claimed in claim 11, wherein a first NLP engine amongst
the plurality of NLP engines processes the taxonomy code, the associated
taxonomy level, and the value associated with the one or more features by:
filtering the second set of items for each item comprised in the first set of items based on the taxonomy code;
creating a feature summary for the first set of items and the second set of items based on the value of the one or more features;
converting the feature summary into the feature vector of the first set of items and the second set of items;
computing a cosine similarity score for the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items; and
obtaining the first set of recommended items based on the cosine similarity score.
15. The system as claimed in claim 11, wherein a second NLP engine amongst
the plurality of NLP engines processes the taxonomy code, the associated
taxonomy level, and the value associated with the one or more features by:
for each taxonomy code:
traversing through the associated taxonomy level for determining a match between an item of the first set of items and an item of the second set of items to obtain a set of level-based items;
concatenating one or more attributes of the set of level-based items to obtain a set of concatenated attributes;
converting the set of concatenated attributes into the feature vector of the first set of items and the second set of items;
computing a cosine distance score between the first set of items and the second set of items based on the feature vector of the first set of items and the second set of items;
computing a taxonomy based matching score based on the cosine distance score; and
obtaining the first set of recommended items based on the taxonomy based matching score.
16. The system as claimed in claim 11, wherein a third NLP engine amongst
the plurality of NLP engines processes the taxonomy code, the associated
taxonomy level, and the value associated with the one or more features by:
creating an index of the second set of items;
identifying a semantic match for a query item associated with the first set of items in the index of the second set of items;
computing a semantic matching score based on the semantic match; and
obtaining the first set of recommended items based on the semantic matching score.
17. The system as claimed in claim 11, wherein a fourth NLP engine amongst
the plurality of NLP engines processes the taxonomy code, the associated
taxonomy level, and the value associated with the one or more features by:
performing a comparison of a name associated with each item amongst the first set of items with each item amongst the second set of items;
computing a string matching score based on the comparison; and
obtaining the first set of recommended items based on the string matching score.
18. The system as claimed in claim 11, wherein the one or more categories are
obtained by:
grouping one or more items into a first category based on an item comprised in the first set of recommended items that is recommended by a first combination of NLP engines;
grouping one or more items into a second category based on an item comprised in the first set of recommended items that is recommended by a second combination of NLP engines;
grouping one or more items into a third category based on an item comprised in the first set of recommended items that is recommended by a third combination of NLP engines; and
grouping one or more items into a fourth category based on an item comprised in the first set of recommended items that is recommended by an NLP engine.
19. The system as claimed in claim 11, wherein the weightage associated to each of the plurality of NLP engines is determined based on a match of an item comprised in the fourth set of items with an associated item amongst the second set of items.
20. The system as claimed in claim 11, wherein the one or more hardware processors are further configured by the instructions to:
update the weightage of each of the plurality of NLP engines based on a comparison of (i) one or more items amongst the second set of recommended items, and (ii) a fifth set of items; and
sort the second set of recommended items based on the updated weightage.