Specification
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR PRODUCT DATA CATEGORIZATION BASED ON TEXT AND ATTRIBUTE FACTORIZATION
Applicant
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to product categorization, and, more particularly, to a method and system for product data categorization based on text and attribute factorization.
BACKGROUND
[002] E-commerce cognitive retail solutions are booming with technological advances made in UI design, availability of products, efficient delivery of products, and the like. E-commerce websites maintain a taxonomy of products so that each product can be effectively presented and merchandized to end customers. Because online shopping does not allow tactile judgment, fashion e-retailers use visual (e.g., pictures and videos) and textual information to reduce consumers' perceived risk and uncertainty in making a preferred choice. There is a growing trend to sell many types of consumer products through e-commerce websites in order to maintain or enhance a company's competitiveness, and sometimes to establish a niche market. For products such as footwear, manufacturers face quite a challenge in providing consumers with a good fit and type of shoe. Due to the plethora of footwear types/styles offered in the marketplace, it becomes difficult to examine and identify the exact type.
[003] Existing techniques build a product taxonomy using a word embedding technique based on semantics, which requires feeding all taxonomy levels of products to obtain a categorization of each product under the levels of the taxonomy. Performing a mere word embedding for semantics on a given set of products might not yield the expected outcome. Further, the algorithm internally follows a continuous bag-of-words approach to fetch semantically similar words or products. Hence, such approaches fall short in categorizing the type of the product.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a system for product data categorization based on text and attribute factorization is provided. The system acquires an input describing a set of product data from an application data store for categorization. The set of product data is preprocessed by removing extraneous text based on a predefined template. Then, a dictionary is created for the set of product data based on a set of attributes comprising a product key with its corresponding product value. Further, multi-level contextual data is extracted for the set of product data by assigning a weight to each product data based on likelihood and creating a set of datapoints for each product data. The set of product data is then categorized by feeding the set of datapoints to a set of predefined parameters comprising a minimum count, a total size, a total number of epochs, a skip gram value, and a hierarchical softmax.
[005] In one embodiment, extracting the multi-level contextual data comprises performing a similarity match for each product data with a training dataset associated with a pretrained Word2Vec model. Further, a weight is assigned to each product data for the closest semantic similarity match based on (i) a product weight, and (ii) a product category weight. The set of product data is pivoted to obtain counts based on the product category, and the assigned weights are reindexed to align with the pivot table index. The set of datapoints is created based on the assigned weights using the pivot table index for each product categorization.
[006] In another aspect, a method for product data categorization based on text and attribute factorization is provided. The method acquires an input describing a set of product data from an application data store for categorization. The set of product data is preprocessed by removing extraneous text based on a predefined template. Then, a dictionary is created for the set of product data based on a set of attributes comprising a product key with its corresponding product value. Further, multi-level contextual data is extracted for the set of product data by assigning a weight to each product data based on likelihood and creating a set of datapoints for each product data. The set of product data is then categorized by feeding the set of datapoints to a set of predefined parameters comprising a minimum count, a total size, a total number of epochs, a skip gram value, and a hierarchical softmax.
[007] In one embodiment, extracting the multi-level contextual data comprises performing a similarity match for each product data with a training dataset associated with a pretrained Word2Vec model. Further, a weight is assigned to each product data for the closest semantic similarity match based on (i) a product weight, and (ii) a product category weight. The set of product data is pivoted to obtain counts based on the product category, and the assigned weights are reindexed to align with the pivot table index. The set of datapoints is created based on the assigned weights using the pivot table index for each product categorization.
[008] In yet another aspect, one or more non-transitory machine-readable information storage mediums are provided, comprising one or more instructions which, when executed by one or more hardware processors, cause the one or more hardware processors to import libraries and a defined model to create a list of model vocabulary, and to acquire an input describing a set of product data from an application data store for categorization. The set of product data is pre-processed by removing extraneous text based on a predefined template. Then, a dictionary is created for the set of product data based on a set of attributes comprising a product key with its corresponding product value. Further, multi-level contextual data is extracted for the set of product data by assigning a weight to each product data based on likelihood and creating a set of datapoints for each product data. The set of product data is then categorized by feeding the set of datapoints to a set of predefined parameters comprising a minimum count, a total size, a total number of epochs, a skip gram value, and a hierarchical softmax.
[009] In one embodiment, extracting the multi-level contextual data comprises performing a similarity match for each product data with a training dataset associated with a pretrained Word2Vec model. Further, a weight is assigned to each product data for the closest semantic similarity match based on (i) a product weight, and (ii) a product category weight. The set of product data is pivoted to obtain counts based on the product category, and the assigned weights are reindexed to align with the pivot table index. The set of datapoints is created based on the assigned weights using the pivot table index for each product categorization.
[010] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[012] FIG. 1 illustrates an exemplary system for product data categorization based on text and attribute factorization, in accordance with some embodiments of the present disclosure.
[013] FIG. 2 illustrates a flow diagram showing a method for product data categorization based on text and attribute factorization, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[014] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[015] Embodiments herein provide a method and system for product data categorization based on text and attribute factorization. The method provides an autonomous approach for type-categorization of product data based on text and attribute factorization of the input received as a user query. The cognitive retail solution enables categorizing each product data based on text. The system obtains a user query as input, where the user query describes a set of product data from one or more application data stores, and processes it by extracting multi-level contextual data for categorization. The disclosed system 100 is further explained with the method as described in conjunction with FIG. 1 and FIG. 2 below.
[016] Referring now to the drawings, and more particularly to FIG. 1 through FIG. 2, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[017] FIG. 1 illustrates an exemplary system for product data categorization based on text and attribute factorization in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred to as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like.
[018] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[019] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[020] FIG. 2 illustrates a flow diagram showing a method for product data categorization based on text and attribute factorization in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 200 by the processor(s) or one or more hardware processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1 and the flow diagram as depicted in FIG. 2. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[021] At step 202 of the method 200, the one or more hardware processors 104 acquire an input describing a set of product data from an application data store for categorization. The system 100 is initialized by importing libraries and a defined model to create a list of model vocabulary. The libraries to be preloaded, such as GENSIM, PANDAS, and NLTK, are obtained from the application data store. In one embodiment, the libraries include NLTK version 3.2.5, GENSIM version 8.1, and PANDAS version 0.24.2. The NLTK package has a vocabulary library that aids the algorithm in understanding words in the English language. The GENSIM package contains the Word2Vec model, from where the model is imported and loaded. The PANDAS package aids in opening the data files for processing. Once the libraries are loaded, the user query is processed, wherein the set of product data includes, for example, in fashion retail, shoe dictionary types such as {casual: {sneaker, slides, moccasins, belle}, formal: {classic pumps, mules}}. The user query (Table 1) is considered on the below dataset:
Table 1 – Example dataset
boots | slippers | sandals | sneakers
booties | clogs | wedges | high heel sandals
over the knee boots | pumps | stilettos | flats
The given dictionary data is assigned weights for generating probability scores, in order to show the relation of each product with its category, as:
x <- c(0.85, 0.80, 0.75, 0.70)
x2 <- c(0.85, 0.80)
y <- c(0.80, 0.80)
names(x) <- c("sneaker", "slides", "moccasins", "belle")
names(x2) <- c("classic pumps", "mules")
names(y) <- c("casual", "formal")
The weights may vary with iterations based on model performance. This dictionary data is built for contextual understanding of the training set, in order to guide the algorithm on the data for generating semantic similarity-based product categorization.
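For illustration only, the initialization described above may be sketched in Python using the named libraries; the file names footwear_w2v.model and products.csv are hypothetical placeholders, and the vocabulary attribute shown follows gensim 4.x (gensim 3.x exposes model.wv.vocab instead):
import nltk
import pandas as pd
from gensim.models import Word2Vec

nltk.download("words")                        # English vocabulary corpus used by NLTK
model = Word2Vec.load("footwear_w2v.model")   # load the defined Word2Vec model (hypothetical file)
vocabulary = list(model.wv.index_to_key)      # create the list of model vocabulary (gensim 4.x)
products = pd.read_csv("products.csv")        # PANDAS opens the data files (hypothetical file)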
[022] At step 204 of the method 200, the one or more hardware processors 104 preprocess the set of product data by removing extraneous text based on a predefined template. Referring now to the example, extraneous or unwanted text is removed from the received user query comprising the set of product data, for a uniform representation of the data using the predefined template. The extraneous text may include commas, dots, error codes, unnecessary columns to be dropped, product data text unmatched against the predefined template, and the like. Data obtained from an external source needs to be processed for an algorithm to ingest and work upon.
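As a hedged illustration, the template-based cleanup might be realized with a simple regular expression; the TEMPLATE pattern and the sample string below are assumptions for the sketch, not text from the specification:
import re

# Hypothetical predefined template: keep only alphabetic tokens; commas,
# dots, error codes and other extraneous characters are stripped.
TEMPLATE = re.compile(r"[^a-zA-Z\s]")

def preprocess(text: str) -> str:
    """Remove extraneous text and normalize whitespace for uniform representation."""
    cleaned = TEMPLATE.sub(" ", text.lower())
    return " ".join(cleaned.split())

print(preprocess("Ravington Boots, err#404 .. size-9"))  # -> "ravington boots err size"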
[023] At step 206 of the method 200, the one or more hardware processors 104 create a dictionary for the set of product data based on a set of attributes comprising a product key with its corresponding product value. Here, a dictionary is created for the set of footwear product data based on the set of attributes. The dictionary data is built using the product category and the products as the key-value pair, respectively. In the next step, weights are assigned to each key and value based on the likelihood of the product, to arrive at a suitable relevancy or probability score. This forms the base of contextual understanding of the data to be fed to the model.
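A minimal Python rendering of the key-value dictionary and its likelihood weights, using the example values given earlier (variable names are illustrative):
# Product category -> products, as key-value pairs (values from the example above).
footwear = {
    "casual": ["sneaker", "slides", "moccasins", "belle"],
    "formal": ["classic pumps", "mules"],
}

# Likelihood-based weights for products and categories (the x, x2 and y vectors above).
product_weights = {"sneaker": 0.85, "slides": 0.80, "moccasins": 0.75,
                   "belle": 0.70, "classic pumps": 0.85, "mules": 0.80}
category_weights = {"casual": 0.80, "formal": 0.80}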
[024] At step 208 of the method 200, the one or more hardware processors 104 extract multi-level contextual data for the set of product data by assigning a weight to each product data based on likelihood and creating a set of datapoints for each product data. To extract the multi-level contextual data from the set of product data in the said example, initially a similarity match is performed for each product data with a training dataset associated with a pretrained Word2Vec model. Further, a weight is assigned to each product data for the closest semantic similarity match based on (i) a product weight, and (ii) a product category weight.
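One way the similarity match could look in gensim is sketched below; closest_match is a hypothetical helper, and the sketch assumes the tokens are present in the pretrained model's vocabulary:
from gensim.models import Word2Vec

def closest_match(model: Word2Vec, token: str, candidates: list) -> str:
    """Return the candidate with the highest cosine similarity to the token."""
    in_vocab = [c for c in candidates if c in model.wv]
    return max(in_vocab, key=lambda c: model.wv.similarity(token, c))

# e.g. closest_match(model, "booties", ["casual", "formal", "party", "outdoor"])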
[025] The product weight is the ratio of the count of the product distribution to the product, as described below in equation 1:

Product_weight (pw) = count(Product_distribution) / Product ... equation 1

Referring now to the example, the product weight is described as below in equation 2:

pw = count([Ravington Boots, Lace up Boots, Riding Boots, Ankle Boots, ...]) / Boots ... equation 2
[026] The product category weight is the ratio of the count of the product category distribution to the product category, as described below in equation 3:

Product_category_weight (pcw) = count(Product_category_distribution) / Product_category ... equation 3

Referring now to the example, the product category weight is described as below in equation 4:

pcw = count([Formal, Casual, Party, Outdoor, Sports]) / Women's Footwear ... equation 4
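Equations 1-4 leave the denominator implicit; a plausible reading, shown below purely as an assumption, divides the count of distinct distribution entries by the total record count for the product or category (the totals here are hypothetical placeholders):
# Hypothetical totals standing in for corpus record counts.
total_boots_records = 100
total_womens_footwear_records = 500

# Equation 2 (assumed reading): product weight for "Boots".
boots_distribution = ["Ravington Boots", "Lace up Boots", "Riding Boots", "Ankle Boots"]
pw = len(boots_distribution) / total_boots_records

# Equation 4 (assumed reading): product category weight for "Women's Footwear".
category_distribution = ["Formal", "Casual", "Party", "Outdoor", "Sports"]
pcw = len(category_distribution) / total_womens_footwear_records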
Then, pivoting of the product data (Table 2) is performed to obtain counts of each product data based on the product category, and the assigned weights are reindexed (Table 3) to align with the pivot table index.
Table 2 – Pivoting the product data
Product Category | Sneakers | Slippers | Sandals
Casual | 1 | 1 | 0
Party | 0 | 0 | 2
Outdoor | 2 | 0 | 0
Table 3 – Reindexing the product data
Product | Product Category | Weights
Sneakers | Casual | 0.80
Slippers | Party | 0.75
Sandals | Outdoor | 0.45
Further, the set of datapoints is created (Table 4) based on the assigned weights using the pivot table index for each product categorization.
Table 4 – Datapoint creation
Product Category | Sneakers | Slippers | Sandals
Casual | 0.0125 | 0.0125 | 0
Party | 0 | 0 | 0.075
Outdoor | 0.050 | 0 | 0
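The pivot-reindex-multiply sequence behind Tables 2-4 can be sketched with pandas; the records below reproduce the counts of Table 2, and, since the exact scaling that yields the Table 4 values is not spelled out in the text, a simple count × weight product is shown as an assumption:
import pandas as pd

# Raw product records reproducing the counts of Table 2.
records = pd.DataFrame({
    "product":  ["Sneakers", "Slippers", "Sandals", "Sandals", "Sneakers", "Sneakers"],
    "category": ["Casual",   "Casual",   "Party",   "Party",   "Outdoor",  "Outdoor"],
})

# Table 2: pivot to obtain counts per product category.
counts = pd.crosstab(records["category"], records["product"])

# Table 3: assigned weights, reindexed to align with the pivot table index.
weights = pd.Series({"Casual": 0.80, "Party": 0.75, "Outdoor": 0.45})
weights = weights.reindex(counts.index)

# Table 4 (assumed scaling): datapoints from counts and aligned weights.
datapoints = counts.mul(weights, axis=0)
print(datapoints)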
[027] The weight is assigned using the product weight and the product category weight as described below in equation 5:

xy(t, d) = x(t, d) × y(t) ... equation 5

In the given equation, 't' is a product and 'd' is a product category. The symbols x and y do not carry any significant meaning by themselves; they represent any weights or bias added to 't' and 'd' in this equation. Weights generally add to the sharpness or steepness of inputs, which leverages semantic similarity. The iteration of weights across the product data depicts the probability scenario of contextual data understanding, and the weighted data represents the final formula of contextuality as described in equation 6 and equation 7:
y = f(net) = f(w^T x + b) ... equation 6
g(x) = f(w^T xi + b) ... equation 7

Then, the step activation is:
f(net) = 1 for net ≥ 0,
f(net) = −1 for net < 0,
so that:
y = −1, if wi xi + b < 0
y = 1, if wi xi + b ≥ 0
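A direct reading of equations 6-7 and the threshold rule is the sign-type activation below; this is a sketch of the stated formulas, not the claimed implementation:
import numpy as np

def f(net: float) -> int:
    """Step activation: f(net) = 1 for net >= 0, f(net) = -1 for net < 0."""
    return 1 if net >= 0 else -1

def categorize(x: np.ndarray, w: np.ndarray, b: float) -> int:
    """y = f(w^T x + b): weighted sum of inputs plus bias, thresholded (equation 6)."""
    return f(float(np.dot(w, x)) + b)

# e.g. categorize(np.array([0.0125, 0.0125, 0.0]), np.array([1.0, 1.0, 1.0]), -0.01) -> 1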
[028] The product weight is the ratio of the total count of the product distribution to the product data. The Word2Vec model is trained by feeding the created datapoints for similarity mapping. The Word2Vec model fetches random words for the set of product data text, based on cosine similarity with one another. The algorithm is made contextually aware of the fashion footwear data, so that the final output gives the correct results. Training inputs are a product of the word vectors of the text data and the assigned weights. It is to be noted that, if xi is a product, then wi is the weight associated with it, and if yi is the product category, then wi is the weight associated with it. Inputs to the algorithm may be summed up as ∑ xi·wi over all inputs i.
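The predefined parameters named in the summary (minimum count, total size, total number of epochs, skip gram value, hierarchical softmax) map naturally onto gensim's Word2Vec constructor; the sketch below uses gensim 4.x parameter names (earlier releases use size/iter) with placeholder values and sentences, not values taken from the specification:
from gensim.models import Word2Vec

# Placeholder datapoint sentences; the real input would be the weighted datapoints.
sentences = [["sneakers", "casual"], ["sandals", "party"], ["sneakers", "outdoor"]]

model = Word2Vec(
    sentences,
    min_count=1,      # minimum count
    vector_size=100,  # total size of the embedding vectors
    epochs=30,        # total number of epochs
    sg=1,             # skip gram value (1 = skip-gram)
    hs=1,             # hierarchical softmax (1 = enabled)
)
print(model.wv.most_similar("sneakers"))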
Documents
Application Documents
# | Name | Date
1 | 202121053177-STATEMENT OF UNDERTAKING (FORM 3) [18-11-2021(online)].pdf | 2021-11-18
2 | 202121053177-REQUEST FOR EXAMINATION (FORM-18) [18-11-2021(online)].pdf | 2021-11-18
3 | 202121053177-PROOF OF RIGHT [18-11-2021(online)].pdf | 2021-11-18
4 | 202121053177-FORM 18 [18-11-2021(online)].pdf | 2021-11-18
5 | 202121053177-FORM 1 [18-11-2021(online)].pdf | 2021-11-18
6 | 202121053177-FIGURE OF ABSTRACT [18-11-2021(online)].jpg | 2021-11-18
7 | 202121053177-DRAWINGS [18-11-2021(online)].pdf | 2021-11-18
8 | 202121053177-DECLARATION OF INVENTORSHIP (FORM 5) [18-11-2021(online)].pdf | 2021-11-18
9 | 202121053177-COMPLETE SPECIFICATION [18-11-2021(online)].pdf | 2021-11-18
10 | Abstract1.jpg | 2022-02-03
11 | 202121053177-FORM-26 [20-04-2022(online)].pdf | 2022-04-20
12 | 202121053177-Power of Attorney [08-12-2022(online)].pdf | 2022-12-08
13 | 202121053177-Form 1 (Submitted on date of filing) [08-12-2022(online)].pdf | 2022-12-08
14 | 202121053177-Covering Letter [08-12-2022(online)].pdf | 2022-12-08
15 | 202121053177-CORRESPONDENCE(IPO)-(WIPO DAS)-20-12-2022.pdf | 2022-12-20
16 | 202121053177-FORM 3 [30-05-2023(online)].pdf | 2023-05-30
17 | 202121053177-FER.pdf | 2023-10-05
18 | 202121053177-OTHERS [02-04-2024(online)].pdf | 2024-04-02
19 | 202121053177-Information under section 8(2) [02-04-2024(online)].pdf | 2024-04-02
20 | 202121053177-FER_SER_REPLY [02-04-2024(online)].pdf | 2024-04-02
21 | 202121053177-DRAWING [02-04-2024(online)].pdf | 2024-04-02
22 | 202121053177-CORRESPONDENCE [02-04-2024(online)].pdf | 2024-04-02
23 | 202121053177-CLAIMS [02-04-2024(online)].pdf | 2024-04-02
24 | 202121053177-US(14)-HearingNotice-(HearingDate-26-09-2025).pdf | 2025-07-23
25 | 202121053177-Correspondence to notify the Controller [19-09-2025(online)].pdf | 2025-09-19
26 | 202121053177-Written submissions and relevant documents [06-10-2025(online)].pdf | 2025-10-06
27 | 202121053177-MARKED COPIES OF AMENDEMENTS [06-10-2025(online)].pdf | 2025-10-06
28 | 202121053177-FORM 13 [06-10-2025(online)].pdf | 2025-10-06
29 | 202121053177-AMMENDED DOCUMENTS [06-10-2025(online)].pdf | 2025-10-06

Search Strategy
# | Name
1 | 202121053177E_26-09-2023.pdf