
Methods And Systems To Ship Products Together From Warehouses

Abstract: The present disclosure provides a system and method for an optimized dispatch plan for a retail network. The method includes receiving a product order from a customer through a user interface of a computing device, predicting a repeat propensity score associated with the customer for making another order, determining shipping constraints associated with one or more ordered products, and estimating an optimized dispatch plan based on the predicted propensity score and the determined shipping constraints.


Patent Information

Application #: 202321006111
Filing Date: 31 January 2023
Publication Number: 31/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. KUMAR, Akansha
F1302, Aparna Hill Park Lake Breeze, PJR Enclave Road, Chandanagar, Hyderabad - 500050, Telangana, India.
2. J, Sai Krishna
Plot no. 217, Flat no. 101, Sri Satya Sai Residency, Aditya Nagar, Kukatpally, Hyderabad - 500072, Telangana, India.
3. PEDIREDLA, Ravi Sankar
Flat No 1/A, Raja Homes, Vandanapuri Colony, Beeramguda, Hyderabad - 502032, Telangana, India.
4. SURYAWANSHI, Pravin
C-8, 3rd Floor, Shradhamata Co-op Housing Society, Jogeshwari - Vikhroli Link Rd, Tirandaz, Powai, Mumbai - 400076, Maharashtra, India.

Specification

Description:

RESERVATION OF RIGHTS
A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.

FIELD OF DISCLOSURE
[0001] The embodiments of the present disclosure generally relate to shipping of products from warehouses. In particular, the present disclosure relates to an optimized dispatch system for shipping products from a warehouse using artificial intelligence and machine learning based architecture.

BACKGROUND OF DISCLOSURE
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is intended only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] E-commerce retail plays a vital role in today’s retail business due to the convenience that it offers to customers. As a result of this convenience, customers are increasingly ordering more and more products from e-commerce sites. These products may be ordered together in one order, or the customer may place multiple orders for different products after certain time intervals. This poses a challenge to balance product selection and delivery speed on the one hand and profit margin on the other. Many times, different orders from the same customer are shipped in different packages and delivered at different times, increasing both the shipment cost and the usage of packing materials.
[0004] There is, therefore, a need in the art for a method and a system that can overcome the shortcomings of the existing prior art.

SUMMARY
[0005] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0006] In an aspect, the present disclosure relates to a system for estimating an optimized dispatch plan for a warehouse network or a retail network, said system including one or more processors and a memory operatively coupled to the one or more processors, where the memory includes processor-executable instructions, which on execution, cause the one or more processors to receive an order from a customer through a user interface of a computing device, predict a repeat propensity score associated with the customer for making another order, and estimate the optimized dispatch plan based on the predicted repeat propensity score.
[0007] In an embodiment, the one or more processors determine whether the predicted repeat propensity score is more than a threshold level, and wait for a predetermined time for another order from the customer if the predicted repeat propensity score is more than the threshold level. In an embodiment, the predetermined time is twelve hours.
[0008] In an embodiment, a set of logistical constraints associated with the ordered products is considered to estimate the optimized dispatch plan. In an embodiment, the set of logistical constraints includes at least one of product availability, service level agreements, vendor quota, vendor capacity, packaging compatibility, dangerous item, and fragile item.
[0009] In another aspect, the present disclosure relates to a method for estimating an optimized dispatch plan for a retail network, where the method includes receiving, by one or more processors, a product order from a customer through a user interface of a computing device, predicting, by the one or more processors, a repeat propensity score associated with the customer for making another order, determining, by the one or more processors, dispatch constraints associated with the product order, and estimating, by the one or more processors, the optimized dispatch plan based at least on the predicted repeat propensity score and the determined dispatch constraints.
[0010] In an embodiment, the method includes waiting, by the one or more processors, for a predetermined time for the customer to make another order if the predicted repeat propensity score is more than a predetermined threshold.
[0011] In an embodiment, the method includes dispatching, by the one or more processors, products together when there are no dispatch constraints associated with the product order, and dispatching, by the one or more processors, products separately when there are dispatch constraints associated with the product order.
[0012] In another aspect, the present disclosure relates to a user equipment including one or more processors, and a memory coupled to the one or more processors, where the memory includes processor-executable instructions, which on execution, cause the one or more processors to transmit one or more product orders to a system to enable the system to process the one or more product orders and allocate an optimized dispatch plan, where the optimized dispatch plan is transmitted to another user equipment for execution.

OBJECTS OF THE PRESENT DISCLOSURE
[0013] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0014] An object of the present disclosure is to provide an optimized shipping plan that minimizes the total shipping cost while satisfying logistical constraints such as product availability, service level agreements, vendor quota and capacity, packaging compatibility, and constraints on dangerous and fragile items.

BRIEF DESCRIPTION OF DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 illustrates an exemplary network architecture (100) in which or with which a proposed system may be implemented, in accordance with an embodiment of the present disclosure.
[0017] FIG. 2 illustrates an exemplary representation (200) of the proposed system for estimating an optimized shipping plan, in accordance with an embodiment of the present disclosure.
[0018] FIG. 3 illustrates an exemplary architecture (300) in which or with which embodiments of the present disclosure may be implemented.
[0019] FIG. 4 illustrates an operational sequence flow (400) for estimating the optimized shipping plan, in accordance with an embodiment of the present disclosure.
[0020] FIG. 5 illustrates an example method (500) for estimating the optimized shipping plan in a retail network, in accordance with an embodiment of the present disclosure.
[0021] FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be implemented.
[0022] The foregoing shall be more apparent from the following more detailed description of the disclosure.

DETAILED DESCRIPTION OF DISCLOSURE
[0023] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0024] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0025] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0026] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0027] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0028] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0029] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0030] The present disclosure provides a robust and effective solution for estimating an optimized dispatch plan for products ordered by a customer. The solution provides an optimized plan to dispatch one or more products ordered by the same customer within a particular period. The disclosed solution provides an optimized dispatch plan based on estimating a repeat propensity score, i.e., the probability that a customer is going to make a repeat purchase within the next few hours, for example, without limitations, a predetermined time of twelve hours, followed by a recommendation for an optimal shipping plan based on the items ordered. The disclosed solution uses an optimization method (mixed integer linear programming or similar) to recommend the optimal shipping plan minimizing the total shipping cost while considering at least the following logistical constraints, an illustrative formulation of which is sketched after this list:
1. Product availability
2. Service level agreements
3. Vendor quota and capacity
4. Packaging compatibility
5. Dangerous and fragile items.
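As a purely illustrative, non-limiting sketch of such an optimization (and not the claimed model itself), the following Python snippet formulates a simplified consolidation decision as a mixed integer linear program using the open-source PuLP library: items from a customer's orders are assigned to packages so that the flat per-package dispatch cost is minimized, package capacity is respected, and any dangerous or fragile item travels alone. All item data, costs, and capacities below are hypothetical assumptions.

```python
# Illustrative MILP sketch (assumed data, not the claimed model): assign the
# items of a customer's order(s) to packages so that the total flat dispatch
# cost is minimized, package capacity is respected, and any constrained
# (dangerous/fragile) item is shipped on its own.
import pulp

items = {
    "SKU-1": {"weight": 0.5, "constrained": False},
    "SKU-2": {"weight": 1.2, "constrained": False},
    "SKU-3": {"weight": 0.3, "constrained": True},   # e.g. a fragile item
}
packages = ["P1", "P2", "P3"]
capacity = 5.0            # assumed capacity per package (kg)
cost_per_package = 40.0   # assumed flat dispatch cost per package used

prob = pulp.LpProblem("dispatch_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (items, packages), cat="Binary")  # item -> package
y = pulp.LpVariable.dicts("use", packages, cat="Binary")              # package used?

# Objective: minimize total dispatch cost (packages used times flat cost).
prob += pulp.lpSum(cost_per_package * y[p] for p in packages)

for i in items:  # every item is shipped exactly once
    prob += pulp.lpSum(x[i][p] for p in packages) == 1
for p in packages:
    # capacity constraint; also forces x[i][p] = 0 whenever package p is unused
    prob += pulp.lpSum(items[i]["weight"] * x[i][p] for i in items) <= capacity * y[p]
    # packaging compatibility: a constrained item shares its package with nothing else
    for i in items:
        if items[i]["constrained"]:
            prob += pulp.lpSum(x[j][p] for j in items if j != i) <= len(items) * (1 - x[i][p])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p in packages:
    contents = [i for i in items if x[i][p].value() > 0.5]
    if contents:
        print(p, "->", contents)
```

In such a formulation, the remaining constraints listed above (service level agreements, vendor quota and capacity, product availability) would be added as further linear constraints in the same manner.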
[0031] The disclosed solution may also be used in various dispatch centres such as, without limitations, warehouses, storage facilities, etc.
[0032] Embodiments of the present disclosure relate to estimating an optimized dispatch plan in a warehouse network. In particular, a system may be provided for optimizing dispatch plans based on the number of orders placed by a user. In an embodiment, the present disclosure relates to product warehouse environments requiring an optimized dispatch plan when the same customer orders more than one product within a specific period.
[0033] In accordance with the embodiments described herein, a user (e.g., a customer or a buyer) may interact with a computing device connected to a network to select and buy certain products that interest him or her. For example, without limitations, the computing device may be a computer system, a laptop, a mobile device or a tablet.
[0034] Accordingly, the present disclosure estimates a repeat propensity score for a customer, i.e., the probability that the customer is going to make a repeat purchase in the next few hours, for example, without limitations, twelve hours, allocates a dispatch time slot based on the estimated probability, and recommends an optimal shipping plan minimizing the total shipping cost.
[0035] Further, the disclosed solution may use an artificial intelligence (AI) triggered system for estimating the optimized dispatch plan. A machine learning (ML) model may be used to estimate the repeat propensity score. In some embodiments, the repeat propensity score may be estimated using an Extreme Gradient Boosting (XGBoost) algorithm. An AI engine/module may be employed for estimating the optimized dispatch plan considering various constraints related to shipping such as, without limitations, product availability, service level agreements, vendor quota and capacity, packaging compatibility, and dangerous and fragile item constraints. For example, without limitations, a Mixed Integer Linear Programming (MILP) optimization model may be used to estimate the optimized dispatch plan once the repeat propensity score has been estimated.
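As a non-limiting illustration of how a repeat propensity score could be produced with XGBoost, the sketch below trains a classifier on hypothetical historical order features and returns the predicted probability of a repeat order within the waiting window; the feature names, training data, and threshold are assumptions for the sketch, not details of the disclosure.

```python
# Illustrative sketch only: train a classifier on hypothetical historical
# order features and return the probability that a customer places another
# order within the predetermined window (e.g. twelve hours).
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Hypothetical features per historical order (e.g. orders in the last 30 days,
# hours since the previous order, basket size); label: reordered within 12 h?
X_train = rng.random((1000, 3))
y_train = rng.integers(0, 2, size=1000)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_train, y_train)

def repeat_propensity_score(features: np.ndarray) -> float:
    """Probability of a repeat order within the predetermined window."""
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

score = repeat_propensity_score(np.array([0.4, 0.1, 0.7]))
threshold = 0.5  # assumed threshold; the disclosure leaves the level configurable
print(f"repeat propensity = {score:.2f}; wait before dispatch: {score > threshold}")
```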
[0036] The optimized dispatch plan includes a recommendation containing details such as, but not limited to, which products should be packed together, the type of packaging that needs to be used, the third-party logistics vendor via whom the shipment should be dispatched, and the customer to whom it should be dispatched.
[0037] Other such benefits and advantages of the disclosed solution are discussed in detail throughout the disclosure.
[0038] The various embodiments throughout the disclosure will be explained in more detail with reference to FIGs. 1-6.
[0039] FIG. 1 illustrates an exemplary network architecture (100) in which or with which embodiments of the present disclosure may be implemented.
[0040] Referring to FIG. 1, the network architecture (100) may include at least one computing device (104) operable by a user (102) deployed in an environment. In an embodiment, the computing device (104) may interoperate with any other computing device (not shown) that may be present in the network architecture (100). In an embodiment, the computing device (104) may be referred to as a user equipment (UE). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “UE” may be used interchangeably throughout the disclosure.
[0041] In an embodiment, the computing device (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device (104) with wireless communication capabilities, and the like. In an embodiment, the computing device (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the computing device (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user (102) such as a touch pad, a touch-enabled screen, an electronic pen, and the like.
[0042] In an embodiment, the computing device (104) may include one or more of the following components: sensor, radio frequency identification (RFID) technology, GPS technology, mechanisms for real-time acquisition of data, passive or interactive interface, mechanisms of outputting and/or inputting sound, light, heat, electricity, mechanical force, chemical presence, biological presence, location, time, identity, other information, or any combination thereof.
[0043] A person of ordinary skill in the art will appreciate that the computing device (104) may include, but not be limited by, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0044] A person of ordinary skill in the art will appreciate that the computing device or UE (104) may not be restricted to the mentioned devices and various other devices may be used.
[0045] Referring to FIG. 1, the computing device (104) may communicate with a system (110) through a network (106). In an embodiment, the network (106) may include at least one of a Fourth Generation (4G) network, a Fifth Generation (5G) network, or the like. The network (106) may enable the computing device (104) to communicate between devices (not shown) and/or with the system (110). In an exemplary embodiment, the network (106) may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0046] Referring to FIG. 1, the system (110) may include artificial intelligence (AI) engines/modules (108) and an optimization engine (112) in which or with which the embodiments of the present disclosure may be implemented. In particular, the system (110), and as such the AI module (108) and the optimization engine (112), facilitates estimating the optimized dispatch plan in the network architecture (100) based on the number of orders placed by a user/customer (102) during a particular time period and the logistical constraints associated with the placed orders. The network architecture (100) may, for example and without limitation, include a communication network within a warehouse, a factory, a storage facility, etc. Further, the system (110) may be operatively coupled to a server (114).
[0047] In accordance with an embodiment of the present disclosure, a user (102), for example, a customer or a buyer, may select an item (i.e., a product) to be purchased using the computing device (104) and may place an order for the same. The user (102) may use any computing device (104) to place the order. The order received from the user (102) is communicated to the system (110) through the network (106). The system (110) includes the AI engine (108) for predicting the customer propensity score and an optimization engine (112) for estimating the best dispatch plan. The propensity score may be estimated by an extreme gradient boosting algorithm. The optimization engine (112) includes a Mixed Integer Linear Programming (MILP) optimization model for estimating the best dispatch plan. The estimated plan may be communicated to a warehouse operations manager or a supply chain logistics manager.
[0048] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0049] FIG. 2 illustrates an exemplary representation (200) of the system (110), in accordance with embodiments of the present disclosure.
[0050] For example, the system (110) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (110). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and the like.
[0051] In an embodiment, the system (110) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) (206) may facilitate communication for the system (110). The interface(s) (206) may also provide a communication pathway for one or more components of the system (110). Examples of such components include, but are not limited to, processing unit/engine(s) (208) and a database (210).
[0052] The processing unit/engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (110) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (110) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry. In an aspect, the database (210) may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engines (208).
[0053] In an embodiment, the processing engine (208) may include engines that receive data from one or more computing devices via a network, such as the computing device (104) via the network (106) (e.g., via the Internet) of FIG. 1, to index the data, to analyse the data, and/or to generate statistics based on the analysis or as part of the analysis. In an embodiment, the analysed data may be stored at the database (210). In an embodiment, the processing engine (208) may include one or more modules/engines such as, but not limited to, an acquisition engine (212), one or more AI engine/module (214), an optimization engine/module (218), and other engine(s) (216). A person of ordinary skill in the art will understand that the AI engine/module (214) may be similar in its functionality to the AI engine/modules (108) of FIG. 1 and the optimization module (218) may be similar in its functionality to the optimize engine (112) of FIG. 1, and hence, may not be described in detail again for the sake of brevity.
[0054] Referring to FIG. 2, the database (210) may store the data, i.e., a set of data parameters associated with customer orders, inventory details, vendor details, SLAs, dispatch plans, etc., to enable the optimization module (218) to estimate the optimized shipping plan.
[0055] By way of example but not limitation, the one or more processor(s) (202) may receive the customer propensity score and details related to logistics constraints as inputs.
[0056] In an embodiment, the one or more processor(s) (202) of the system (110) may cause the acquisition engine (212) to extract the set of data parameters from the database (210) for enabling prediction of the customer propensity score, which is further used by the optimization module (218) to estimate the optimized shipment plan. In an embodiment, the AI engine (214) may utilise one or more machine learning models to pre-process the set of data parameters. In an embodiment, results of the pre-processing or analysis may thereafter be transmitted back to the computing device (104), to other devices, to a server providing a web page to a user (102) of the computing device (104), or to other non-device entities.
[0057] In an embodiment, based on the pre-processing, the one or more processor(s) (202) may cause the AI engine (214) to predict a propensity score for a customer i.e., the probability that a customer will place another order within a specific time period.
[0058] A person of ordinary skill in the art will appreciate that the exemplary representation (200) may be modular and flexible to accommodate any kind of changes in the system (110). In an embodiment, the data may be collected and deposited in a cloud-based data lake, where it is processed to extract actionable insights.
[0059] FIG. 3 illustrates an exemplary architecture (300) in which or with which embodiments of the present disclosure may be implemented.
[0060] Referring to FIG. 3, the exemplary architecture (300) comprises a database, a computing system, a machine learning module, a display system, and the like. For example, the exemplary architecture (300) comprises a database (302), an optimizer (304), a display system (306), and a user (308). The database (302) includes multiple data sources such as, without limitations, customer orders, inventory data, vendor details, service level agreements (SLAs), and dispatch plans based on the data. The fulfilment optimizer (304) is similar in its functionality to the optimization engine (112) of FIG. 1 or the optimization module (218) of FIG. 2. The optimizer (304) takes as inputs the data from the database (302) along with the customer propensity score, i.e., the probability that a customer will place another order within the next few hours, e.g., twelve hours, and evaluates a dispatch plan that is displayed on the display system (306). The display system (306) enables a user (308), such as a warehouse manager or a logistics manager, to plan the shipping of the products ordered by the customer.
[0061] Referring to FIG. 3, the display system (306) displays the optimized dispatch or shipment plan, i.e., the recommendation given by the optimizer (304), which contains details regarding what products should be packed together, the type of packaging that needs to be used, the third-party logistics vendor via whom the shipment should be dispatched, and the customer to whom it should be dispatched. The displayed data includes fields such as, without limitations, dispatch slot, order ID, shipment ID, stock keeping unit (SKU) number, quantity, packing type required for the product, customer ID, and dispatch vendor. The user (308) may use the plan for shipping the ordered products accordingly, thereby reducing supply chain cost, packaging cost, and the carbon footprint arising from the use of extra packing material.
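Purely as an illustrative assumption of how one displayed record might be represented, the sketch below mirrors the fields listed above; the field types and example values are hypothetical and not part of the disclosure.

```python
# Hypothetical representation of one row of the displayed dispatch plan;
# field names follow the listing above, values and types are assumed.
from dataclasses import dataclass

@dataclass
class DispatchPlanRow:
    dispatch_slot: str     # e.g. an allocated date/time window
    order_id: str
    shipment_id: str
    sku: str               # stock keeping unit (SKU) number
    quantity: int
    packing_type: str      # packing required for the product, e.g. "carton"
    customer_id: str
    dispatch_vendor: str   # third-party logistics vendor

row = DispatchPlanRow(dispatch_slot="2023-02-01 14:00-16:00", order_id="ORD-001",
                      shipment_id="SHP-001", sku="SKU-123", quantity=2,
                      packing_type="carton", customer_id="CUST-42",
                      dispatch_vendor="Vendor-A")
print(row)
```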
[0062] Referring to FIG. 3, the optimizer (304) includes the machine learning algorithm to predict the customer propensity score and the optimization algorithm to recommend the least-cost shipping plan.
[0063] A person of ordinary skill in the art will appreciate that the architecture (300) may be modular and flexible to accommodate any kind of changes. Although FIG. 3 shows exemplary components of the architecture (300), in other embodiments, the architecture (300) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 3. Additionally, or alternatively, one or more components of the architecture (300) may perform functions described as being performed by one or more other components of the architecture (300).
[0064] FIG. 4 illustrates an operational sequence flow (400) for estimating the optimized shipping plan, in accordance with an embodiment of the present disclosure.
[0065] Referring to FIG. 4, as a first step (402), customer order details are received by the system. Then, the AI model predicts (404) the repeat purchase propensity score, i.e., the probability of the same customer placing another order within the next 12 hours. If the said probability is less than a pre-set threshold value (406), the dispatch-by date or the dispatch time slot for the product(s) in the order is allocated straightaway (408). In case the probability is high (414), the system waits for 12 hours (416). Once 12 hours have elapsed, the system allocates the dispatch-by date for all orders the customer might have placed within that 12-hour span (418). The system then checks for other constraints, such as dangerous or fragile item constraints, associated with the ordered goods (410). Based on the constraints, the shipment plan is generated (412) such that, wherever possible, multiple items ordered by the same customer are combined in the same shipment if there are no constraints on the ordered goods; if constraints are present, the items are shipped separately.
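A minimal sketch of this decision flow is given below, assuming hypothetical helpers predict_propensity(), collect_orders_within(), and has_shipping_constraints(); only the twelve-hour window, the threshold check, and the combine-or-separate rule come from the description above, everything else is a placeholder.

```python
# Illustrative sketch of the FIG. 4 flow; the helper functions and order
# objects are hypothetical placeholders, not part of the disclosure.
import time

WAIT_SECONDS = 12 * 60 * 60   # predetermined twelve-hour window
THRESHOLD = 0.5               # assumed pre-set threshold value

def plan_dispatch(order, customer_id, predict_propensity,
                  collect_orders_within, has_shipping_constraints):
    score = predict_propensity(customer_id)             # predict repeat propensity (404)
    if score < THRESHOLD:                                # low probability (406)
        orders = [order]                                 # allocate dispatch slot straightaway (408)
    else:                                                # high probability (414)
        time.sleep(WAIT_SECONDS)                         # wait 12 hours (416); in practice a scheduled job
        orders = collect_orders_within(customer_id, WAIT_SECONDS)  # all orders in the span (418)

    combinable = [o for o in orders if not has_shipping_constraints(o)]  # constraint check (410)
    constrained = [o for o in orders if has_shipping_constraints(o)]
    shipments = []
    if combinable:
        shipments.append(combinable)                     # combined shipment (412)
    shipments.extend([o] for o in constrained)           # constrained items shipped separately
    return shipments
```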
[0066] FIG. 5 illustrates an example method (500) for estimating an optimized shipping plan in a retail network, in accordance with an embodiment of the present disclosure.
[0067] At step 502, the method (500) may include receiving an order from a customer and predicting, at step 504, a probability that the customer will place another order within a defined time limit, i.e., within 12 hours. The method (500) further proceeds with checking, at step 506, whether the probability of the customer placing a next order is low. The determination is made with respect to a preset threshold. If the probability of a next order is low, the method (500) proceeds with allocating a dispatch slot at step 508 and dispatching the order. On the other hand, if the probability of the customer placing a next order is high, i.e., more than the preset threshold, the method (500) proceeds with waiting for a 12-hour period at step 510. Upon completion of the 12-hour period, the orders made by the customer within that period are allocated a dispatch time at step 508. Further, at steps 512 and 516, the orders are checked for the number of products and their associated shipping constraints. If there are shipping constraints, the orders are dispatched separately at step 514. On the other hand, if there are no shipping constraints, the orders are dispatched together in the same packing at step 518.
[0068] FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be utilized. As shown in FIG. 6, the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), communication port(s) (660), and a processor (670). A person skilled in the art will appreciate that the computer system (600) may include more than one processor and communication ports. The processor (670) may include various modules associated with embodiments of the present disclosure. The communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (600) connects. The main memory (630) may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (640) may be any static storage device(s) including, but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (670). The mass storage device (650) may be any current or future mass storage solution, which may be used to store information and/or instructions.
[0069] The bus (620) communicatively couples the processor (670) with the other memory, storage, and communication blocks. The bus (620) can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front-side bus (FSB), which connects the processor (670) to the computer system (600).
[0070] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to the bus (620) to support direct operator interaction with the computer system (600). Other operator and administrative interfaces may be provided through network connections connected through the communication port(s) (660). In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.
[0071] Thus, the present disclosure enables generation of a cost-effective shipment plan for shipping products from a warehouse, a factory, or a storage setting when the same customer places multiple orders. The present disclosure facilitates an optimized shipping plan that reduces packaging cost while safeguarding products that have shipping constraints.
[0072] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.

ADVANTAGES OF THE PRESENT DISCLOSURE
[0073] The present disclosure provides a solution for combining products in a single shipment/package, thereby lowering shipping cost.
[0074] The present disclosure provides a method to reduce the usage of packing material, thereby reducing the cost of packing material.
[0075] The present disclosure reduces the last-mile delivery cost, since fewer deliveries are needed due to combined packaging.
[0076] The present disclosure also reduces the use of packing material and the number of deliveries, resulting in a smaller carbon footprint for the organization.
[0077] The present disclosure achieves higher customer satisfaction by providing multiple products in the same shipment.
Claims:

1. A system (110) for estimating an optimized dispatch plan for a warehouse network (106), said system (110) comprising:
one or more processors (202); and
a memory (204) operatively coupled to the one or more processors (202), wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to:
receive an order from a customer through a user interface of a computing device (104);
predict a repeat propensity score associated with the customer for making another order; and
estimate the optimized dispatch plan based on the predicted repeat propensity score.
2. The system (110) as claimed in claim 1, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to determine whether the predicted repeat propensity score is more than a threshold level.
3. The system (110) as claimed in claim 2, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to wait for a predetermined time for another order from the customer if the predicted repeat propensity score is more than the threshold level.
4. The system (110) as claimed in claim 3, wherein the predetermined time is twelve hours.
5. The system (110) as claimed in claim 3, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to determine a set of logistical constraints associated with the orders and estimate the optimized dispatch plan based on the set of logistical constraints.

6. The system (110) as claimed in claim 5, wherein the set of logistical constraints comprises at least one of: product availability, service level agreements, vendor quota, vendor capacity, packaging compatibility, dangerous item, and fragile item.
7. A method (500) for estimating an optimized dispatch plan for a retail network, said method (500) comprising:
receiving (502), by one or more processors (202), a product order from a customer through a user interface of a computing device (104);
predicting (504), by the one or more processors (202), a repeat propensity score associated with the customer for making another order;
determining (516), by the one or more processors (202), dispatch constraints associated with the product order; and
estimating, by the one or more processors (202), the optimized dispatch plan based at least on the predicted repeat propensity score and the determined dispatch constraints.
8. The method (500) as claimed in claim 7, comprising waiting (510), by the one or more processors (202), for a predetermined time for the customer to make another order if the predicted repeat propensity score is more than a predetermined threshold.
9. The method (500) as claimed in claim 7, comprising dispatching, by the one or more processors (202), products together when there are no dispatch constraints associated with the product order.
10. The method (500) as claimed in claim 7, comprising dispatching, by the one or more processors (202), products separately when there are dispatch constraints associated with the product order.
11. A user equipment (104), comprising:
one or more processors; and
a memory coupled to the one or more processors, wherein the memory includes processor-executable instructions, which on execution, cause the one or more processors to:
transmit one or more product orders to a system to enable the system to process the one or more product orders and allocate an optimized dispatch plan, wherein the optimized dispatch plan is transmitted to another user equipment for execution.

Documents

Application Documents

# Name Date
1 202321006111-STATEMENT OF UNDERTAKING (FORM 3) [31-01-2023(online)].pdf 2023-01-31
2 202321006111-REQUEST FOR EXAMINATION (FORM-18) [31-01-2023(online)].pdf 2023-01-31
3 202321006111-POWER OF AUTHORITY [31-01-2023(online)].pdf 2023-01-31
4 202321006111-FORM 18 [31-01-2023(online)].pdf 2023-01-31
5 202321006111-FORM 1 [31-01-2023(online)].pdf 2023-01-31
6 202321006111-DRAWINGS [31-01-2023(online)].pdf 2023-01-31
7 202321006111-DECLARATION OF INVENTORSHIP (FORM 5) [31-01-2023(online)].pdf 2023-01-31
8 202321006111-COMPLETE SPECIFICATION [31-01-2023(online)].pdf 2023-01-31
9 202321006111-FORM-8 [01-02-2023(online)].pdf 2023-02-01
10 202321006111-ENDORSEMENT BY INVENTORS [28-02-2023(online)].pdf 2023-02-28
11 Abstract1.jpg 2023-04-29
12 202321006111-FER.pdf 2025-08-08
13 202321006111-FER_SER_REPLY [14-10-2025(online)].pdf 2025-10-14
14 202321006111-DRAWING [14-10-2025(online)].pdf 2025-10-14
15 202321006111-CORRESPONDENCE [14-10-2025(online)].pdf 2025-10-14
16 202321006111-FORM 3 [08-11-2025(online)].pdf 2025-11-08

Search Strategy

1 202321006111_SearchStrategyNew_E_202321006111E_09-04-2025.pdf