
Voice Assisted System, Device And Method For Inventory Management

Abstract: A method for managing inventory using voice colloquial messages is disclosed. The method includes the steps of obtaining (102), by a microphone of a system, an audio feed of a conversation from a user, wherein the recorded audio feed includes at least a detail associated with a transaction, and wherein the transaction is an instance of buying or selling of goods or services; retrieving (104), by a processor of the system using a trained machine learning model, an intent associated with an item and a quantity associated with the item, from the obtained detail associated with the transaction; and managing (106), by the processor, the inventory based on the retrieved intent by identifying an inventory status to fulfil the retrieved intent.


Patent Information

Application #: 202341051980
Filing Date: 02 August 2023
Publication Number: 35/2023
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

Divisha Innovations Private Limited
S3, #104, 13th Main Road, HSR Layout 5th Sector, Bengaluru - 560102, Karnataka, India.

Inventors

1. BORA, Anand
203, Manya Arena Apartment, Nyanapanahalli Main Road, Hulimavu, Bengaluru - 560076, Karnataka, India
2. PERUMAL, Inbasekaran
First Floor 1986, C Block, MCECHS Layout, 1A Main, 20 D Cross, Sahakara Nagar, Bengaluru - 560092, Karnataka, India.
3. PULLOT, Shirin
S3, #104, 13th Main Road, HSR Layout 5th Sector, Bengaluru - 560102, Karnataka, India

Specification

Description:
TECHNICAL FIELD
[0001] The present invention relates to the field of computing, and more particularly to inventory management.

BACKGROUND
[0002] Inventory management is a discipline primarily concerned with specifying the shape and placement of stocked goods. It is required at different locations within a facility, or across many locations of a supply network, to support the regular and planned course of production and the stocking of materials. Inventory management is the tracking of inventory from manufacturers to warehouses, and from these facilities to the point of sale.
[0003] A merchandiser ensures that the shelves of a retail store are stocked with products and displays them appropriately for customers. Merchandisers also track inventory levels, report any issues or shortages to management, and clean up unwanted items resulting from flagrant violations of in-store decorum. With the continued move of commerce to digital platforms, merchandisers have a good source of data to make intelligent inventory management decisions. Search engine data (e.g., customer product queries) collected from a retailer website or other digital platform may be analysed to predict the product needs and desires of the customers and may be used to fill inventory gaps when certain products are not sold or carried by the retailer. Unlike retailer websites or digital platforms, physical retail locations of merchandisers are unable to capture and meaningfully leverage such customer query data.
[0004] Most businessmen and traders (specifically in countries like India) have significant difficulty handling inventory and stock in their businesses. To address this, they either employ staff who may not stay for long, or they buy expensive inventory management software that comes with a steep learning curve. This voice assisted system mitigates that dependency and enables efficient inventory handling for the business.
[0005] Traditional inventory management systems face several challenges, including limited real-time visibility, manual data entry errors, language barriers, vulnerability to damage, and inefficient search and retrieval. These challenges can affect businesses across industries, including retailers, wholesalers, manufacturers, and distributors. Small businesses with limited resources may also face difficulties in managing inventory manually.
[0006] Further, modern and available solutions for inventory management may require a high level of technical expertise, manual data entry, and extensive training. As a result, many shop owners may not have the necessary resources or knowledge to operate these systems effectively. Moreover, these systems may not support multiple languages or cater to shopkeepers with limited literacy or educational backgrounds.

SUMMARY
[0007] The present invention relates to the field of computing, and more particularly to inventory management. More specifically, the present invention enables businesses to automate the management of sales and purchases of inventory using the vocal speech commonly used by people. The use of voice assisted technology provides an easier and more efficient way to manage inventory, as it reduces the need for manual input and data entry.
[0008] A voice assisted inventory management technology is an improvement over traditional inventory management systems as it provides a more intuitive and user-friendly way to manage inventory. It also improves efficiency as it eliminates the need for manual data entry or typing, allowing for faster and more accurate inventory updates.
[0009] It can be particularly useful in settings such as warehouses, pharmaceutical stores, and distribution centres, where accurate inventory management is critical to the success of the business.
[0010] The present invention is designed to address the problem of inventory management for common Indian shop owners who do not maintain a ledger or find it difficult to use advanced technology or inventory management applications. Existing inventory management systems often require manual data entry and are time-consuming, which can be a significant challenge for small shop owners who have limited resources and personnel. In addition, many shop owners may not have the technical knowledge or proficiency in English to use advanced inventory management applications. Furthermore, this technology addresses the issue of illiteracy among shopkeepers who may not know how to read and write. The use of voice-assisted technology eliminates the need for written records and can be easily operated through voice commands. The system supports around 10 Indian languages, making it accessible to shopkeepers who do not speak or read English.
[0011] Accordingly, aspects of the present invention provide a voice-assisted inventory management system that provides a simple and efficient way for small shop owners in India to manage their inventory goods. This innovative solution addresses the specific challenges faced by these shop owners who struggle with maintaining ledgers and using complex inventory management applications.
[0012] Consequently, the present invention provides a voice assisted system, device and method for inventory management.
[0013] The system offers a range of benefits, including the ability to track goods effortlessly through simple voice commands, eliminating the need for manual data entry and minimizing errors. It supports approximately 10 Indian languages, ensuring accessibility for shopkeepers who are not familiar with English. Additionally, the technology operates solely through voice commands, making it ideal for those who may have limited reading and writing skills.
[0014] By adopting this invention, shop owners gain access to a user-friendly and accessible solution that requires minimal training and technical expertise. The system offers numerous advantages over traditional inventory management methods, such as time savings, reduced risk of stockouts or overstocking, and enhanced accuracy.

BRIEF DESCRIPTION OF DRAWINGS
[0015] Preferred representations of the system of the present invention are described in detail below with reference to the drawings, wherein:
[0016] FIG. 1 illustrates a method for managing inventory using voice colloquial messages, in accordance with the present invention.
[0017] FIG. 2 illustrates a voice assisted device (200) to manage inventory using voice colloquial messages, in accordance with the present invention.
[0018] FIG. 3A illustrates an example of the training data used to train the voice-based inventory management model, in accordance with the present invention.
[0019] FIG. 3B illustrates an annotated training data used to train the voice-based inventory management model, in accordance with the present invention.
[0020] FIG. 3C illustrates a training pipeline of the voice-based inventory management model, in accordance with the present invention.
[0021] FIG. 4 illustrates a screen shot of a user interface of an implemented system, in accordance with the present invention.
[0022] FIG. 5 illustrates a block diagram of internal and external components of the voice assisted device (200) depicted in FIG. 2, and of the system in general, according to at least one embodiment of the present invention.

DESCRIPTION OF EMBODIMENTS
[0023] The word “exemplary” or “embodiment” is used herein to mean “serving as an example, instance, or illustration.” Any implementation or aspect described herein as “exemplary” or as an “embodiment” is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term “aspects” does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation.
[0024] Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of this invention to those skilled in the art. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments.
[0025] The present invention may be a system, a method, and/or a device at any possible technical detail level of integration. The device may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
[0026] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0027] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems and smartphone devices that perform the specified functions or acts, or by combinations of special purpose hardware, smartphone devices and computer instructions.
[0028] The following described exemplary embodiments provide a system, method and program product for intelligent inventory management based on customer interactions in a physical retail location (e.g., retail store) of a merchandiser. As such, the present embodiment has the capacity to improve the technical field of inventory management by providing a user (e.g., merchandiser) with customer query data from a physical retail location for making intelligent retail inventory decisions. More specifically, an inventory management system may implement one or more voice recording devices in a physical retail location to capture interactions of the customers. Then, the inventory management system may implement natural language processing to analyse the customers' interactions to determine at least what product the customer is purchasing and in what quantity. If a product is out of stock or is not sold by the merchandiser, the inventory management program may store one or more data/metadata associated with the customer interaction and the product in an inventory management database. The merchandiser may access the inventory management database to fill retail gaps by stocking the products that customers have requested at the physical retail location.
[0029] As previously described, with the continued move of commerce to digital platforms, merchandisers have a good source of data to make intelligent inventory management decisions. Search engine data (e.g., customer product queries) collected from a retailer website or other digital platform may be analyzed to predict the product needs and desires of the customers and may be used to fill inventory gaps when certain products are not sold or carried by the retailer. Unlike retailer websites or digital platforms, physical retail locations of merchandisers are unable to capture and meaningfully leverage such customer query data.
[0030] The invention works by providing a simplified and efficient method for retail business owners to manage their inventory through voice-based assistance. Instead of manually recording purchase details, users can simply use voice commands to log the transaction, such as "Purchased 20 kg rice for 1400 rupees." The system captures essential information, including the intent (purchase or sale), the item (rice in this example), the quantity (20 kg), and optionally, additional details like price, brand, or type mentioned in the command.
[0031] According to one embodiment, a spoken word may be detected. In response, a microphone may record the customer query and the staff response. Then, the microphone may send the dialog to a remote natural language processing service. In one embodiment, the question may be analyzed for product names and the response may be analyzed for key phrases indicating lack of stock. If the response contains the lack of stock phrase, the product name and query metadata (e.g., store location, time of day) may be retained in a database. Thereafter, the database may be analyzed using one or more known techniques used for digital inventory management.
[0032] FIG. 1, in an embodiment, illustrates a method for managing inventory using voice assistance, in accordance with the present invention.
[0033] At step 102, a microphone of a system obtains an audio feed of a conversation from a user. The recorded audio feed includes at least a detail associated with a transaction and the transaction is an instance of buying or selling of goods or services.
[0034] It may be appreciated that the inventory transactions are used to track the quantities and movements of inventory items. Transaction records are created by the applications that interface with the inventory system, such as Purchasing, Receiving, and Task Management.
[0035] In an exemplary embodiment, the audio feed is obtained in a native language of the user, and the audio feed obtained in the native language is pre-processed to convert the native language into a language identifiable to the processor for retrieving the intent.
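By way of a non-limiting sketch only, such pre-processing might look as follows in Python; transcribe_audio and translate_to_english are hypothetical placeholders for any speech-to-text and translation services that support the target Indian languages, and are not components defined by this application:

```python
# Hypothetical pre-processing sketch: transcribe the native-language audio, then
# translate the transcript into a language the intent model understands.

def transcribe_audio(audio_bytes: bytes, language: str) -> str:
    """Hypothetical speech-to-text stub; a real system would call an STT engine here."""
    raise NotImplementedError("plug in a speech-to-text service for the user's language")

def translate_to_english(text: str, source_language: str) -> str:
    """Hypothetical translation stub; a real system would call a translation service here."""
    raise NotImplementedError("plug in a translation service")

def preprocess_audio(audio_bytes: bytes, user_language: str = "hi") -> str:
    """Convert a native-language audio feed into text identifiable to the intent processor."""
    native_text = transcribe_audio(audio_bytes, language=user_language)
    return translate_to_english(native_text, source_language=user_language)
```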
[0036] At step 104, a processor of the system retrieves an intent associated with an item and a quantity associated with the item using a trained machine learning model. The intent is retrieved from the obtained detail associated with the transaction.
[0037] In an exemplary embodiment, the intent is associated with buying or selling of the item, and wherein the item is selected from goods and services.
[0038] In an exemplary embodiment, the step of retrieving further includes retrieving additional details associated at least with the intent, the item and the quantity.
[0039] In an exemplary embodiment, the method further includes the steps of converting the obtained detail into a corresponding text, and performing, by utilizing at least one natural language processing and named entity recognizer (NER) functionality, a natural language processing of the corresponding text to determine and retrieve the intent associated with the item and the quantity associated with the item.
[0040] In an exemplary embodiment, the step of retrieving further includes: determining whether the audio feed by the user includes a portion of a trigger phrase; causing subsequent audio feed to be digitally recorded, the subsequent user speech being included in the obtained audio feed in accordance with a determination that the audio feed by the user includes the portion of the trigger phrase; and causing speech recognition to be performed to identify a user request from the obtained audio feed. In this exemplary embodiment, identifying the user request comprises: generating text based on the obtained audio feed; performing natural language processing of the text; and determining user intent based on a result of the natural language processing.
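A minimal sketch of this trigger-phrase flow is given below, assuming a generic speech-recognition backend; the speech_to_text stub and the example trigger phrases are illustrative assumptions rather than parts of the claimed method:

```python
from typing import Callable, Optional

# Assumed example trigger phrases; the actual phrases are a design choice.
TRIGGER_PHRASES = ("purchased", "sold", "bought")

def speech_to_text(audio: bytes) -> str:
    """Hypothetical speech-recognition stub."""
    raise NotImplementedError("plug in a speech-recognition service")

def handle_audio_feed(leading_audio: bytes, record_more: Callable[[], bytes]) -> Optional[str]:
    """Record subsequent audio only if a portion of a trigger phrase is detected."""
    leading_text = speech_to_text(leading_audio).lower()
    if not any(phrase in leading_text for phrase in TRIGGER_PHRASES):
        return None                                  # no trigger phrase: nothing is recorded
    full_audio = leading_audio + record_more()       # digitally record the subsequent audio feed
    return speech_to_text(full_audio)                # text handed to NLP for intent determination
```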
[0041] In an exemplary embodiment, the intent associated with the item is retrieved by comparing at least one word from the obtained detail with one or more predefined list of words in a database indicating specific intent associated with a request to purchase or a request to sell.
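A minimal sketch of this word-list comparison, with assumed (non-exhaustive) purchase and sell word lists, is:

```python
from typing import Optional

# Assumed, non-exhaustive word lists indicating purchase or sell intent.
PURCHASE_WORDS = {"purchased", "bought", "shopped", "got", "paid"}
SELL_WORDS = {"sold", "sale", "dispatched"}

def detect_intent(transaction_text: str) -> Optional[str]:
    """Compare words from the transaction detail against the predefined intent word lists."""
    words = {word.strip(".,").lower() for word in transaction_text.split()}
    if words & PURCHASE_WORDS:
        return "PURCHASE"
    if words & SELL_WORDS:
        return "SALE"
    return None

# Example: detect_intent("Purchased 20 kg rice for 1400 rupees") returns "PURCHASE".
```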
[0042] In an exemplary embodiment, the trained machine learning model is a Natural Language Processing (NLP) model from spaCy.
[0043] At step 106, the processor manages the inventory based on the retrieved intent by identifying an inventory status to fulfil the retrieved intent.
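By way of illustration only, step 106 might be sketched as follows; the in-memory dictionary stands in for whatever inventory database is actually used, and the starting stock and reorder level are assumptions for the example:

```python
# Assumed in-memory stock record, keyed by item name, in the item's own units.
inventory = {"rice": 50.0}

def manage_inventory(intent: str, item: str, quantity: float, reorder_level: float = 10.0) -> str:
    """Apply a purchase or sale to the stock record and report the resulting inventory status."""
    current = inventory.get(item, 0.0)
    if intent == "PURCHASE":
        inventory[item] = current + quantity
    elif intent == "SALE":
        if quantity > current:
            return f"insufficient stock: only {current} {item} available"
        inventory[item] = current - quantity
    status = "low stock" if inventory[item] <= reorder_level else "in stock"
    return f"{item}: {inventory[item]} remaining ({status})"

# Example: manage_inventory("PURCHASE", "rice", 20.0) returns "rice: 70.0 remaining (in stock)".
```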
[0044] In an exemplary implementation, the invention works by providing a simplified and efficient method for retail business owners to manage their inventory through voice commands. Instead of manually recording purchase details, users can simply use voice commands to log the transaction, such as "Purchased 20 kg rice for 1400 rupees." The system captures essential information, including the intent (purchase or sale), the item (rice in this example), the quantity (20 kg), and optionally, additional details like price, brand, or type mentioned in the command.
[0045] To build a system capable of recognizing these categories, we utilize the spaCy library, which is a free and open-source tool for natural language processing. Within spaCy, we utilize the pre-defined Named Entity Recognizer (NER) functionality, which identifies entities such as person names, organizations, locations, and more.
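For example, spaCy's general-purpose English pipeline can be applied directly to a command (this sketch assumes the freely available en_core_web_sm model is installed via "python -m spacy download en_core_web_sm"); its generic labels motivate the custom categories trained in the following paragraphs:

```python
import spacy

nlp = spacy.load("en_core_web_sm")                       # general-purpose English pipeline
doc = nlp("Purchased 20 kg rice for 1400 rupees from Darbar")

for ent in doc.ents:
    # Labels come from the general-purpose model (e.g. QUANTITY, MONEY, ORG) and
    # will vary; they are not the custom inventory categories described below.
    print(ent.text, ent.label_)
```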
[0046] For a specific use case, a custom spaCy NER model is trained to identify the required categories based on voice commands. It is worth noting that other models or libraries can also be used for the NER task if desired.
[0047] To train our model to accurately identify the categories from voice commands, a training dataset consisting of similar sentences and examples relevant to a use case is created. This dataset will help the model learn and generalize patterns in the data, enabling it to recognize and extract the relevant information from new voice commands efficiently.
[0048] Step 1: Data preparation: Any example from the training data can be broken down into 6 basic entities, as follows (a sketch of a corresponding record structure is given after this list):
[0049] Sale/Purchase intent (mandatory): This entity represents the intention behind the action described in the sentence, indicating whether it is a sale or a purchase.
[0050] Item (mandatory): This entity refers to the specific item or product involved in the transaction. It could be any tangible or intangible item, such as a physical product, service, or digital content.
[0051] Quantity (mandatory): This entity denotes the quantity or amount associated with the item mentioned in the sentence. It could be expressed in units, measurements, or any appropriate numerical representation.
[0052] Price/Cost (optional): This entity represents the price or cost associated with the transaction. It indicates the amount of money paid or charged for the item.
[0053] Brand (optional): This entity refers to the specific brand or manufacturer of the item. It helps identify the particular brand associated with the mentioned item.
[0054] Type (optional): This entity provides additional information about the item, specifying its type or category. It can help distinguish between different variations or classifications of the item.
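A sketch of a record structure corresponding to these six entities is shown below; the field names and example values are illustrative rather than the application's actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryTransaction:
    intent: str                      # sale or purchase (mandatory)
    item: str                        # product or service involved (mandatory)
    quantity: str                    # e.g. "10 kgs", "one vial" (mandatory)
    price: Optional[str] = None      # e.g. "50 rs/kg" (optional)
    brand: Optional[str] = None      # e.g. "Darbar" (optional)
    type: Optional[str] = None       # e.g. "Sona Masuri", "Brown" (optional)

# Example: InventoryTransaction(intent="Purchase", item="Rice", quantity="10 kgs")
```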
[0055] Retail Store Examples:
Purchased 10kgs of rice
○ Sale/Purchase intent → Purchase ○ Item → Rice ○ Quantity → 10 kgs
Sold 10kgs of Sona Masuri Rice
○ Sale/Purchase intent → Sold ○ Item → Rice ○ Quantity → 10 kgs ○ Type → Sona Masuri
Bought 10kgs of Sona Masuri Brown Rice
○ Sale/Purchase intent → Purchase ○ Item → Rice ○ Quantity → 10 kgs ○ Type → Sona Masuri ○ Type → Brown
Shopped 10kgs of Sona Masuri Brown Rice by Darbar
○ Sale/Purchase intent → Purchase ○ Item → Rice ○ Quantity → 10 kgs ○ Type → Sona Masuri ○ Type → Brown ○ Brand → Darbar
Got 10kgs of Sona Masuri Brown Rice by Darbar at 50 rs/kg
○ Sale/Purchase intent → Purchase ○ Item → Rice ○ Quantity → 10 kgs ○ Type → Sona Masuri ○ Type → Brown ○ Brand → Darbar ○ Price/Cost → 50 rs/kg
[0056] Pharma (Medicine) Sector Examples:
Paid for one vial of methylcobalamin injection
○ Sale/Purchase intent → Purchase ○ Item → methylcobalamin ○ Quantity → one vial
Sold one vial of methylcobalamin injection to dallas pharmaceuticals and formulations
○ Sale/Purchase intent → Sale ○ Item → Methylcobalamin Injection ○ Quantity → One vial ○ Seller → Dallas Pharmaceuticals and Formulations
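One of the examples above might be expressed in spaCy's character-offset training format as sketched below; the label names are assumptions that should match whatever annotation scheme is actually used:

```python
# (text, annotations) pairs with character offsets for each entity span.
TRAIN_DATA = [
    (
        "Sold 10kgs of Sona Masuri Rice",
        {"entities": [
            (0, 4, "SELL_INTENT"),    # "Sold"
            (5, 10, "QUANTITY"),      # "10kgs"
            (14, 30, "ITEM"),         # "Sona Masuri Rice"
        ]},
    ),
]
```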
[0057] To provide a better understanding of the data preparation process, it is important to highlight how the purchase intent and sell intent were determined. In this case, a predefined list of words was created to capture the specific intent expressed in the sentence. This list contained words that indicated whether the sentence was a request to purchase or a request to sell. By analyzing the sentence and matching it with the words in the predefined list, together with the patterns the model has learned from the data seen during training, the system could accurately identify the intent behind the sentence, whether it was a purchase request or a sell request. This approach ensured that the system could categorize the statements correctly based on the intended action.
[0058] The other three entities, namely the item, seller, and units, are trainable entities that are specific to the use case. In the context of a retail store, the item entity refers to the specific products available in the inventory database. These items need to be identified and extracted from the sentence.
[0059] FIG. 3A illustrates an example of the training data used to train the voice-based inventory management model, in accordance with the present invention. FIG. 3B illustrates an annotated training data used to train the voice-based inventory management model, in accordance with the present invention. FIG. 3C illustrates a training pipeline of the voice-based inventory management model, in accordance with the present invention.
[0060] Similarly, the seller entity represents the vendor or supplier involved in the transaction. This entity should be recognized and extracted from the sentence, capturing the specific seller involved in the purchase or sale.
[0061] Lastly, the units entity is specific to the type of product being bought or sold. It captures the measurement or quantity associated with the item, such as kilograms, litres, or units. It is crucial to ensure that the data preparation process aligns with the specific use case and accurately represents the entities of interest. Collaboration with domain experts can be beneficial to identify the relevant entities and ensure their proper representation in the text data. This ensures that the trained model can effectively understand and extract the required information from the given sentences.
[0062] Using the 3 cases (as shown in FIGs. 3A-3C), training data is generated. The next step is to annotate the entities in the text.
[0063] Step 2: Annotation:
[0064] In the context of entity labeling, such as the example "sold 10kgs of Sona Masuri Rice," the process involves annotating the text to indicate specific entities and their boundaries. This annotation is crucial for training models to accurately recognize and classify named entities.
[0065] One commonly used format for labeling sequences of tokens is the IOB (Inside, Outside, Beginning) format. This format is widely employed in natural language processing tasks like named entity recognition and part-of-speech tagging.
[0066] In the IOB format, each token in a sequence is assigned a tag that indicates whether it is the beginning, inside, or outside of a named entity. The tags typically follow the pattern "B-TYPE" for the beginning of an entity, "I-TYPE" for the inside of an entity, and "O" for tokens outside any entity. For example, in the sentence "Sold 10kgs of Sona Masuri Rice," the token "10kgs" would be labeled as "B-QUANTITY," the tokens of "Sona Masuri Rice" as "B-ITEM" followed by "I-ITEM," and "Sold" would be labeled as "B-SELL_INTENT."
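Token by token, the IOB tags for that sentence would therefore look like this (an illustrative annotation using the label names above):

```python
tokens = ["Sold",          "10kgs",      "of", "Sona",   "Masuri", "Rice"]
tags   = ["B-SELL_INTENT", "B-QUANTITY", "O",  "B-ITEM", "I-ITEM", "I-ITEM"]
```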
[0067] To annotate the data, you start by manually labelling the entities in a small set of example sentences. This involves identifying and marking the specific words or phrases that represent the entities of interest. To streamline the annotation process, tools like Prodigy by spaCy can be used. Prodigy provides a user-friendly web interface that allows efficient annotation of data. It also offers various techniques like active learning and data augmentation to improve the quality and efficiency of the annotation process.
[0068] Once a sufficient number of sentences are annotated, a model is trained to recognize named entities using the annotated data in the IOB format.
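A sketch of turning such annotations into spaCy's binary training format (the DocBin consumed by "spacy train") is given below; the example sentence and labels follow the training-format sketch shown earlier:

```python
import spacy
from spacy.tokens import DocBin

TRAIN_DATA = [
    ("Sold 10kgs of Sona Masuri Rice",
     {"entities": [(0, 4, "SELL_INTENT"), (5, 10, "QUANTITY"), (14, 30, "ITEM")]}),
]

nlp = spacy.blank("en")              # blank English pipeline used only for tokenization
doc_bin = DocBin()

for text, annotations in TRAIN_DATA:
    doc = nlp.make_doc(text)
    spans = []
    for start, end, label in annotations["entities"]:
        span = doc.char_span(start, end, label=label)
        if span is not None:         # skip spans that do not align with token boundaries
            spans.append(span)
    doc.ents = spans
    doc_bin.add(doc)

doc_bin.to_disk("train.spacy")       # referenced by paths.train in config.cfg
```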
[0069] Step 3: Training the spaCy NER model:
[0070] During training, the model analyzes the annotated examples to identify patterns and relationships between words and their corresponding entity types. It learns to recognize the specific entities based on the context in which they appear.
[0071] Once the model has been trained on the annotated data, it can be used to automatically annotate new, unseen data by predicting the entities based on the learned patterns. This allows for efficient and automated entity recognition in large datasets without the need for manual annotation.
[0072] While Prodigy by spaCy is a paid tool that facilitates efficient annotation, an open-source alternative like NER-Annotator can also be used. NER-Annotator provides similar functionality, assisting in the effective annotation of data without requiring a financial investment. The config.cfg file is used to specify the settings and hyperparameters for the training process. The important configurations and their values are as follows:
[0073] lang: The language of the model, set to "en" for English.
[0074] pipeline: The pipeline components for the model, which includes the "tok2vec" and "ner" components.
[0075] batch_size: The number of texts processed in one batch during training, set to 1000.
[0076] components.ner.model.architecture: The architecture of the NER model, set to "spacy.TransitionBasedParser.v2".
[0077] training.patience: The number of training steps to continue without improvement in the evaluation score before early stopping, set to 1600.
[0078] training.max_epochs: The maximum number of epochs to train the model, set to 10.
[0079] training.batcher.size.start: The starting size of the batches, set to 100.
[0080] training.batcher.size.stop: The maximum size of the batches, set to 1000.
[0081] training.batcher.size.compound: The rate at which the batch size increases, set to 1.001.
[0082] training.logger.progress_bar: Whether to display a progress bar during training, set to False.
[0083] training.optimizer.learn_rate: The learning rate for the optimizer, set to 0.001.
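With a config.cfg of this kind in place, training might be launched as sketched below; the file paths and output directory are assumptions, and the dotted command-line overrides simply mirror the settings listed above:

```python
import subprocess

# Launch spaCy's training CLI; train.spacy / dev.spacy are assumed corpus files
# produced as shown earlier, and ./ner_model is an assumed output directory.
subprocess.run(
    [
        "python", "-m", "spacy", "train", "config.cfg",
        "--output", "./ner_model",
        "--paths.train", "train.spacy",
        "--paths.dev", "dev.spacy",
        "--training.max_epochs", "10",
        "--training.patience", "1600",
        "--training.optimizer.learn_rate", "0.001",
    ],
    check=True,
)
```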
[0084] During training, the model learns to recognize the entities in the text based on the patterns and features present in the training data. Once training is complete, the model can be used to predict entities in new, unseen examples.
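Once trained, the model can be loaded and applied to a new command as sketched below; the model path follows the output directory assumed in the training sketch, and the printed labels depend on the label set used during annotation:

```python
import spacy

nlp = spacy.load("./ner_model/model-best")               # best checkpoint written by spacy train
doc = nlp("Purchased 20 kg rice for 1400 rupees")

extracted = {ent.label_: ent.text for ent in doc.ents}
print(extracted)   # e.g. {"PURCHASE_INTENT": "Purchased", "QUANTITY": "20 kg", "ITEM": "rice", "COST": "1400 rupees"}
```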
[0085] Step 4: Model evaluation:
[0086] Once the model has been trained and its performance evaluated using metrics like precision, recall, and F1 score, the next step is to enhance the model's accuracy and overall performance. While these metrics provide a useful initial assessment, they may not fully capture how well the model will perform in real-world scenarios.
[0087] To address this, it is crucial to test the model using diverse evaluation methods that go beyond the training and test data. This helps identify potential limitations and areas for improvement. Evaluating the model on new and unseen data can help uncover issues such as overfitting, where the model performs well on the training data but fails to generalize to new examples.
[0088] Additionally, incorporating human feedback is valuable in refining the model. Human evaluators can provide insights and judgments that go beyond automated metrics, helping to identify areas where the model may still require adjustments or fine-tuning.
[0089] By continuously testing, evaluating, and refining the model using a variety of extrinsic evaluation methods and human feedback, it can be iteratively improved to achieve higher levels of accuracy and performance. This iterative process ensures that the model meets the requirements and performs well in real-world scenarios, considering factors beyond the limitations of the training data alone.
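A minimal sketch of scoring the trained model on held-out annotated sentences is shown below; the development sentence, offsets, and label names are assumptions for illustration:

```python
import spacy
from spacy.training import Example

nlp = spacy.load("./ner_model/model-best")

DEV_DATA = [
    ("Bought 5kgs of wheat",
     {"entities": [(0, 6, "PURCHASE_INTENT"), (7, 11, "QUANTITY"), (15, 20, "ITEM")]}),
]

# Build gold-standard examples; nlp.evaluate() runs the pipeline and scores the predictions.
examples = [Example.from_dict(nlp.make_doc(text), annotations) for text, annotations in DEV_DATA]
scores = nlp.evaluate(examples)
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])   # entity precision, recall, F1
```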
[0090] As shown in FIG. 4, the model is able to correctly identify the intent, unit, item, seller, and cost. The voice-based inventory management system incorporates advanced natural language processing techniques, allowing for sophisticated understanding and interpretation of voice commands, resulting in accurate and reliable inventory management.
[0091] The model can be trained to handle more complex inventory scenarios, such as tracking batch numbers, expiration dates, or serial numbers, ensuring compliance with regulatory requirements and enabling effective product traceability.
[0092] The system incorporates machine learning algorithms, continually improving its accuracy and performance over time as it learns from user interactions and data patterns.
[0093] The system offers scalability, capable of handling a wide range of inventory sizes and accommodating businesses of various scales, from small retailers to large warehouses.
[0094] FIG. 2 illustrates a voice assisted device (200) to manage inventory using voice colloquial messages, in accordance with the present invention.
[0095] In an embodiment, a voice assisted device (200) to manage inventory using voice colloquial messages is disclosed. The voice assisted device includes a microphone (202) to obtain an audio feed of a conversation from a user, wherein the recorded audio feed includes at least a detail associated with a transaction, and wherein the transaction is an instance of buying or selling of goods or services.
[0096] The voice assisted device also includes one or more processors (204) configured to retrieve an intent associated with an item and a quantity associated with the item from the obtained detail associated with the transaction received from the microphone. The intent is retrieved using a trained machine learning model (206). The intent is associated with buying or selling of the item, and the item is selected from goods and services.
[0097] In an exemplary embodiment, to retrieve the intent, the processor is configured to: determine whether the audio feed by the user includes a portion of a trigger phrase; cause, in accordance with a determination that the audio feed by the user includes the portion of the trigger phrase, subsequent audio feed to be digitally recorded, the subsequent user speech being included in the obtained audio feed; and cause speech recognition to be performed to identify a user request from the obtained audio feed.
[0098] In an exemplary embodiment, to identify the user request, the processor generates text based on the obtained audio feed, performs natural language processing of the text, and determines user intent based on a result of the natural language processing.
[0099] The one or more processors manage the inventory based on the retrieved intent by identifying an inventory status to fulfil the retrieved intent.
[0100] In an embodiment, a system to manage inventory using voice colloquial messages related to stock handling is also provided, the system comprising a device (200) for receiving voice colloquial messages related to stock handling. However, since the system has all the features of the device (200), except that they are implemented separately rather than in a single device per se, the embodiments are not repeated for brevity.
[0101] Some embodiments provide a device (e.g., device 200) for interacting with a voice based digital assistant. The voice assisted device includes a housing, one or more processors (e.g., processor(s) 204) in the housing, and memory in the housing, the memory coupled to the one or more processors and comprising instructions for automatically identifying and connecting to a digital assistant server without a user having to enter information about (e.g., an internet address for) the server.
[0102] The device also includes a power supply (e.g., a transformer or a battery such as power supply) at least partially within the housing (e.g., with a fold-out plug), a wireless network module (e.g., wireless network module) at least partially within the housing, the wireless network module coupled to the one or more processors, and a human-machine interface (e.g., human machine interface). In some implementations, the wireless network module is configured to utilize any known wireless network protocol such as Bluetooth, WiFi, voice over Internet Protocol (VoIP), infrared, or any other suitable communication protocol.
[0103] In some implementations, natural language processor includes categorization module. In some implementations, the categorization module determines whether each of the one or more terms in a text string (e.g., corresponding to a speech input associated with a digital photograph) is one of an entity, an activity, or a location, as discussed in greater detail below. In some implementations, the categorization module classifies each term of the one or more terms as one of an entity, an activity, or a location.
[0104] In some implementations, the natural language processor passes the structured query (including any completed parameters) to the task flow processing module (“task flow processor”). The task flow processor is configured to perform one or more of: receiving the structured query from the natural language processor, completing the structured query, and performing the actions required to “complete” the user's ultimate request. In some implementations, the various procedures necessary to complete these tasks are provided in task flow models. In some implementations, the task flow models include procedures for obtaining additional information from the user, and task flows for performing actions associated with the actionable intent.
[0105] It may be appreciated by a person skilled in the art that the system of the present invention can be scaled to persistently or constantly listening intelligent systems and devices. Further, this system can be scaled to different kinds of businesses, big or small. It can also be appreciated that the trained machine learning model is a spaCy-based Natural Language Processing (NLP) model. It is further asserted that the same technique built over other NLP systems, such as NLTK, would likewise fall within the scope of these claims.
[0106] FIG. 5 illustrates a block diagram of internal and external components of a device (200) (for example, smartphones, tablets, or any electronic device having processing capabilities) depicted in FIG. 2, and of the system in general, according to at least one embodiment of the present invention.
[0107] It should be appreciated that FIG. 5 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements.
[0108] Data processing systems 902, 904 are representative of any electronic device capable of executing machine-readable program instructions. A data processing system may be representative of a smart phone, a computer system, PDA, or other electronic device. Examples of computing systems, environments, and/or configurations that may be represented by data processing systems 902, 904 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, network PCs, minicomputer systems, and distributed cloud computing environments that include any of the above systems or devices.
[0109] User client computer and network server may include respective sets of internal components 902 a, b and external components illustrated in FIG. 5. Each of the sets of internal components 902 a, b includes one or more processors 906, one or more computer-readable RAMs 908 and one or more computer-readable ROMs 910 on one or more buses 912, and one or more operating systems 914 and one or more computer-readable tangible storage devices 916. The one or more operating systems 914, the software program 108 and the inventory management program 110 a in client computer 102, and the inventory management program 110 b in network server 112, may be stored on one or more computer-readable tangible storage devices 916 for execution by one or more processors 906 via one or more RAMs 908 (which typically include cache memory). In the embodiment illustrated in FIG. 5, each of the computer-readable tangible storage devices 916 is a magnetic disk storage device of an internal hard drive. Alternatively, each of the computer-readable tangible storage devices 916 is a semiconductor storage device such as ROM 910, EPROM, flash memory or any other computer-readable tangible storage device that can store a computer program and digital information.
[0110] Each set of internal components 902 a, b also includes a R/W drive or interface 918 to read from and write to one or more portable computer-readable tangible storage devices 920 such as a CD-ROM, DVD, memory stick, magnetic tape, magnetic disk, optical disk or semiconductor storage device. A software program, such as the software program 108 and the inventory management program 110 a, 110 b can be stored on one or more of the respective portable computer-readable tangible storage devices 920, read via the respective R/W drive or interface 918 and loaded into the respective hard drive 916.
[0111] Each set of internal components 902 a, b may also include network adapters (or switch port cards) or interfaces 922 such as TCP/IP adapter cards, wireless Wi-Fi interface cards, or 3G or 4G wireless interface cards or other wired or wireless communication links. The software program 108 and the inventory management program 110 a in the client computer and the inventory management program 110 b in the network server computer can be downloaded from an external computer (e.g., server) via a network (for example, the Internet, a local area network or other wide area network) and the respective network adapters or interfaces 922. From the network adapters (or switch port adaptors) or interfaces 922, the software program 108 and the inventory management program 110 a in the client computer and the inventory management program 110 b in the network server computer are loaded into the respective hard drive 916. The network may comprise copper wires, optical fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
[0112] Each of the sets of external components 904 a, b can include a computer display monitor 924, a keyboard 926, and a computer mouse 928. External components 904 a, b can also include touch screens, virtual keyboards, touch pads, pointing devices, and other human interface devices. Each of the sets of internal components 902 a, b also includes device drivers 930 to interface to computer display monitor 924, keyboard 926 and computer mouse 928. The device drivers 930, R/W drive or interface 918 and network adapter or interface 922 comprise hardware and software (stored in storage device 916 and/or ROM 910).
[0113] It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
Claims:
1. A method for managing inventory using voice colloquial messages, the method comprising:
obtaining (102), by a microphone of a system, an audio feed of a conversation from a user, wherein the recorded audio feed includes at least a detail associated with a transaction, and wherein the transaction is an instance of buying or selling of goods or services;
retrieving (104), by a processor of the system using a trained machine learning model, an intent associated with an item and a quantity associated with the item, from the obtained detail associated with the transaction; and
managing (106), by the processor, the inventory based on the retrieved intent by identifying an inventory status to fulfil the retrieved intent.
2. The method as claimed in claim 1, wherein the intent is associated with buying or selling of the item, and wherein the item is selected from goods and services.
3. The method as claimed in claim 1, wherein the retrieving further comprises: retrieving additional details associated at least with the intent, the item and the quantity.
4. The method as claimed in claim 1, wherein the method further comprises:
converting, by the processor, the obtained detail into a corresponding text; and
performing, by the processor, utilizing at least one natural language processing and named entity recognizer (NER) functionality, a natural language processing of the corresponding text to determine and retrieve the intent associated with the item and the quantity associated with the item.
5. The method as claimed in claim 1, wherein the step of retrieving further includes:
determining whether the audio feed by the user includes a portion of a trigger phrase;
causing, in accordance with a determination that the audio feed by the user includes the portion of the trigger phrase, subsequent audio feed to be digitally recorded, the subsequent user speech being included in the obtained audio feed; and
causing speech recognition to be performed to identify a user request from the obtained audio feed, wherein identifying the user request comprises:
generating text based on the obtained audio feed;
performing natural language processing of the text; and
determining user intent based on a result of the natural language processing.
6. The method as claimed in claim 1, wherein the intent associated with the item is retrieved by comparing at least one word from the obtained detail with one or more predefined list of words in a database indicating specific intent associated with a request to purchase or a request to sell.
7. The method as claimed in claim 1, wherein the trained machine learning model is a Natural Language Processing (NLP) model from Spacy.
8. The method as claimed in claim 1, wherein the audio feed is obtained in a native language of the user, and wherein the audio feed obtained in the native language is pre-processed to convert the native language into a language identifiable to the processor for retrieving the intent.
9. A device (200) to manage inventory using voice colloquial messages, the device comprising:
a microphone (202) to obtain an audio feed of a conversation from a user, wherein the recorded audio feed includes at least a detail associated with a transaction, and wherein the transaction is an instance of buying or selling of goods or services; and
one or more processors (204) configured to:
retrieve, using a trained machine learning model (206), an intent associated with an item and a quantity associated with the item, from the obtained detail associated with the transaction received from the microphone, wherein the intent is associated with buying or selling of the item, and wherein the item is selected from goods and services; and
manage the inventory based on the retrieved intent by identifying an inventory status to fulfil the retrieved intent; and
wherein to retrieve the intent, the processor is configured to:
determine whether the audio feed by the user includes a portion of a trigger phrase;
cause, in accordance with a determination that the audio feed by the user includes the portion of the trigger phrase, subsequent audio feed to be digitally recorded, the subsequent user speech being included in the obtained audio feed; and
cause speech recognition to be performed to identify a user request from the obtained audio feed, wherein to identify the user request the processor:
generates text based on the obtained audio feed;
performs natural language processing of the text; and
determines user intent based on a result of the natural language processing.
10. A system to manage inventory using voice colloquial messages, the system comprising a device (200) as claimed in claim 9.

Documents

Application Documents

# Name Date
1 202341051980-STATEMENT OF UNDERTAKING (FORM 3) [02-08-2023(online)].pdf 2023-08-02
2 202341051980-POWER OF AUTHORITY [02-08-2023(online)].pdf 2023-08-02
3 202341051980-FORM FOR STARTUP [02-08-2023(online)].pdf 2023-08-02
4 202341051980-FORM FOR SMALL ENTITY(FORM-28) [02-08-2023(online)].pdf 2023-08-02
5 202341051980-FORM 1 [02-08-2023(online)].pdf 2023-08-02
6 202341051980-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-08-2023(online)].pdf 2023-08-02
7 202341051980-EVIDENCE FOR REGISTRATION UNDER SSI [02-08-2023(online)].pdf 2023-08-02
8 202341051980-DRAWINGS [02-08-2023(online)].pdf 2023-08-02
9 202341051980-DECLARATION OF INVENTORSHIP (FORM 5) [02-08-2023(online)].pdf 2023-08-02
10 202341051980-COMPLETE SPECIFICATION [02-08-2023(online)].pdf 2023-08-02
11 202341051980-FORM-9 [04-08-2023(online)].pdf 2023-08-04
12 202341051980-STARTUP [09-11-2023(online)].pdf 2023-11-09
13 202341051980-FORM28 [09-11-2023(online)].pdf 2023-11-09
14 202341051980-FORM 18A [09-11-2023(online)].pdf 2023-11-09
15 202341051980-FER.pdf 2023-12-13
16 202341051980-FER_SER_REPLY [13-06-2024(online)].pdf 2024-06-13
17 202341051980-DRAWING [13-06-2024(online)].pdf 2024-06-13
18 202341051980-CORRESPONDENCE [13-06-2024(online)].pdf 2024-06-13
19 202341051980-CLAIMS [13-06-2024(online)].pdf 2024-06-13
20 202341051980-FORM-26 [14-06-2024(online)].pdf 2024-06-14
21 202341051980-US(14)-HearingNotice-(HearingDate-04-08-2025).pdf 2025-07-07
22 202341051980-Correspondence to notify the Controller [30-07-2025(online)].pdf 2025-07-30
23 202341051980-FORM-26 [04-08-2025(online)].pdf 2025-08-04
24 202341051980-Written submissions and relevant documents [19-08-2025(online)].pdf 2025-08-19
25 202341051980-Annexure [19-08-2025(online)].pdf 2025-08-19

Search Strategy

1 202341051980E_30-11-2023.pdf