Abstract: Conventionally, determination of the number of tills to be opened on any day has been based on the user's judgement, which is a very reactive process and impacts costs associated with infrastructure and manpower. Embodiments of the present disclosure provide systems and methods that implement combinatorial models for estimating/predicting the number of tills to be operated in an entity for a given time duration. More specifically, a first combinatorial model is applied on historical service time and basket size pertaining to historical transactions of a plurality of users to obtain a predicted service time rate. Similarly, a second combinatorial model is applied on the predicted service time rate and a predicted arrival rate for estimating the number of tills to be operated at a given time duration in the entity. Implementation of combinatorial models enables efficient prediction/estimation of tills wherein non-uniformity of data is observed, and control of the service time buffer for enhanced performance accuracy.
Claims:
1. A processor implemented method, comprising:
obtaining, via one or more hardware processors, an input comprising an arrival rate pattern of a plurality of users, wherein the arrival rate pattern is obtained for a plurality of instances, each instance from the plurality of instances is of a specific time interval corresponding to a time duration, wherein the arrival rate of the plurality of users corresponds to one or more departments in an entity (202);
applying, via the one or more hardware processors, a machine learning model on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models, wherein each of the plurality of arrival rate models corresponds to a sub-duration of the time duration (204);
predicting, via the one or more hardware processors, arrival rate for the time duration specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models from the plurality of arrival rate models by applying the plurality of arrival rate models on historical transactional data obtained from the one or more tills operating at the one or more departments in the entity (206);
obtaining, via the one or more hardware processors, historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users, the historical transactions being comprised in the historical data (208);
applying, via the one or more hardware processors, a combinatorial model on the historical data to obtain a predicted service time rate for each of the plurality of users (210); and
applying, via the one or more hardware processors, one or more models on the predicted arrival rate and the predicted service time rate controlling a waiting time in a queue to obtain an estimation of a number of tills to be operated at the one or more departments in the entity for a given time duration (212).
2. The processor implemented method of claim 1, wherein the service time rate is based on a service time buffer, and wherein the service time buffer is indicative of a time difference between at least two transactions.
3. The processor implemented method of claim 1, wherein the waiting time in the queue is less than or equal to a predefined waiting time period.
4. The processor implemented method of claim 1, wherein the combinatorial model comprises a machine learning model and a probabilistic model.
5. The processor implemented method of claim 1, wherein the one or more models comprise at least one of a queuing model and a simulation model, and wherein the queuing model comprises one of a multi-server-single queue model or a multi-server-multi queue model.
6. The processor implemented method of claim 1, further comprising predicting, for an intraday, a number of tills to be operated at the one or more departments in the entity based on (i) an intraday arrival rate of a plurality of users, and (ii) an intraday service time rate along with an intraday service time buffer, wherein the number of tills to be operated at the one or more departments in the entity for the intraday is predicted using the one or more models.
7. The processor implemented method of claim 1, wherein the arrival rate and the service time rate are predicted based on one or more influencing parameters, and wherein the one or more influencing parameters comprise number of items on one or more promotions, number of campaigns, one or more seasonal offering days, weather forecast, number of transactions performed at one or more tills operated in the entity.
8. The processor implemented method of claim 1, wherein the step of applying the one or more models on the arrival rate and the service time rate comprises:
predicting a performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity; and
controlling a service time buffer associated with one or more transactions to obtain an enhanced performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity.
9. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain an input comprising an arrival rate pattern of a plurality of users, wherein the arrival rate pattern is obtained for a plurality of instances, each instance from the plurality of instances is of a specific time interval corresponding to a time duration, wherein the arrival rate of the plurality of users corresponds to one or more departments in an entity;
apply a machine learning model on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models, wherein each of the plurality of arrival rate models corresponds to a sub-duration of the time duration;
predict arrival rate for the time duration specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models from the plurality of arrival rate models by applying the plurality of arrival rate models on historical transactional data obtained from the one or more tills operating at the one or more departments in the entity;
obtain historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users, the historical transactions being comprised in the historical data;
apply a combinatorial model on the historical data to obtain a predicted service time rate for each of the plurality of users; and
apply one or more models on the predicted arrival rate and the predicted service time rate controlling a waiting time in a queue to obtain an estimation of a number of tills to be operated at the one or more departments in the entity for a given time duration.
10. The system of claim 9, wherein the service time rate is based on a service time buffer, and wherein the service time buffer is indicative of a time difference between at least two transactions.
11. The system of claim 9, wherein the waiting time in the queue is less than or equal to a predefined waiting time period.
12. The system of claim 9, wherein the combinatorial model comprises a machine learning model and a probabilistic model.
13. The system of claim 9, wherein the one or more models comprise at least one of a queuing model and a simulation model, and wherein the queuing model comprises one of a multi-server-single queue model or a multi-server-multi queue model.
14. The system of claim 9, wherein the one or more hardware processors are further configured by the instructions to predict, for an intraday, a number of tills to be operated at the one or more departments in the entity based on (i) an intraday arrival rate of a plurality of users, and (ii) an intraday service time rate along with an intraday service time buffer, wherein the number of tills to be operated at the one or more departments in the entity for the intraday is predicted using the one or more models.
15. The system of claim 9, wherein the arrival rate and the service time rate are predicted based on one or more influencing parameters, and wherein the one or more influencing parameters comprise number of items on one or more promotions, number of campaigns, one or more seasonal offering days, weather forecast, number of transactions performed at one or more tills operated in the entity.
16. The system of claim 9, wherein the step of applying one or more models on the arrival rate and the service time rate comprises:
predicting a performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity; and
controlling a service time buffer associated with one or more transactions to obtain an enhanced performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
TILLS ESTIMATION FOR OPERATING IN ENTITIES AND CONTROLLING ASSOCIATED SERVICE TIME BUFFER USING COMBINATORIAL MODELS
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to a tills optimizer, and, more particularly, to tills estimation for operating in entities and controlling an associated service time buffer using combinatorial models.
BACKGROUND
In retail stores, tilling, as a major store operation, consumes up to 30 percent of the store labor. Normally, the till budget is planned with a flat percentage increase or decrease based on the previous year's budget and the sales forecast of the current year, to arrive at a rough capacity plan. It has been observed that, many a time, the stores exceed the allocated tilling budget. However, this is not very visible as the store manager reallocates efforts from other operations to tilling. The determination of the number of tills to be opened on any day is currently made based on the store manager's judgement and intelligence. This is normally a very reactive process, i.e., by looking at the customer footfall in the stores and at other events such as weather, number of promotions, and the like. Therefore, it has been observed that the number of tills opened is far more than needed, thus impacting till costs at the cost of other activities.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. In one aspect, there is provided a processor implemented method for optimizing the number of tills to be operated in an entity. The method comprises obtaining an input comprising an arrival rate pattern of a plurality of users, wherein the arrival rate pattern is obtained for a plurality of instances, each instance is of a specific time interval corresponding to a time duration, and wherein the arrival rate pattern of the plurality of users corresponds to one or more departments in an entity; applying a machine learning model on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models, wherein each of the plurality of arrival rate models corresponds to a sub-duration of the time duration; predicting arrival rate for the time duration specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models from the plurality of arrival rate models by applying the plurality of arrival rate models on historical transactional data obtained from the one or more tills operating at the one or more departments in the entity; obtaining historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users; applying a combinatorial model on the historical data to obtain a predicted service time rate for each of the plurality of users; and applying one or more models on the arrival rate and the service time rate controlling a waiting time in a queue to obtain an estimation of a number of tills to be operated at the one or more departments in the entity for a given time duration.
In an embodiment, the service time rate is based on a service time buffer, and wherein the service time buffer is indicative of a time difference between at least two transactions.
In an embodiment, the waiting time in the queue is less than or equal to a predefined waiting time period.
In an embodiment, the combinatorial model comprises a machine learning model and a probabilistic model.
In an embodiment, the one or more models comprise at least one of a queuing model and a simulation model.
In an embodiment, the queuing model comprises one of a multi-server-single queue model or a multi-server-multi queue model.
In an embodiment, the method further comprises predicting, for an intraday, number of tills to be operated at the one or more departments in the entity based on (i) an intraday arrival rate of a plurality of users, and (ii) an intraday service time rate along with an intraday service time buffer, wherein the number of tills to be operated at the one or more departments in the entity for the intraday is predicted using the one or more models.
In an embodiment, the arrival rate and the service time rate are predicted based on one or more influencing parameters.
In an embodiment, the one or more influencing parameters comprise number of items on one or more promotions, number of campaigns, one or more seasonal offering days, weather forecast, number of transactions performed at one or more tills operated in the entity.
In an embodiment, the step of applying one or more models on the arrival rate and the service time rate comprises: predicting a performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity; and controlling a service time buffer associated with one or more transactions to obtain an enhanced performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity.
In another aspect, there is provided a system for optimizing the number of tills to be operated in an entity. The system comprises a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain an input comprising an arrival rate pattern of a plurality of users, wherein the arrival rate pattern is obtained for a plurality of instances, each instance is of a specific time interval corresponding to a time duration, and wherein the arrival rate pattern of the plurality of users corresponds to one or more departments in an entity; apply a machine learning model on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models, wherein each of the plurality of arrival rate models corresponds to a sub-duration of the time duration; predict arrival rate for the time duration specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models from the plurality of arrival rate models by applying the plurality of arrival rate models on historical transactional data obtained from the one or more tills operating at the one or more departments in the entity; obtain historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users; apply a combinatorial model on the historical data to obtain a predicted service time rate for each of the plurality of users; and apply one or more models on the arrival rate and the service time rate controlling a waiting time in a queue to obtain an estimation of a number of tills to be operated at the one or more departments in the entity for a given time duration.
In an embodiment, the service time rate is based on a service time buffer, and wherein the service time buffer is indicative of a time difference between at least two transactions.
In an embodiment, the waiting time in the queue is less than or equal to a predefined waiting time period.
In an embodiment, the combinatorial model comprises a machine learning model and a probabilistic model.
In an embodiment, the one or more models comprise at least one of a queuing model and a simulation model.
In an embodiment, the queuing model comprises one of a multi-server-single queue model or a multi-server-multi queue model.
In an embodiment, the one or more hardware processors are further configured by the instructions to predict, for an intraday, number of tills to be operated at the one or more departments in the entity based on (i) an intraday arrival rate of a plurality of users, and (ii) an intraday service time rate along with an intraday service time buffer, wherein the number of tills to be operated at the one or more departments in the entity for the intraday is predicted using the one or more models.
In an embodiment, the arrival rate and the service time rate are predicted based on one or more influencing parameters.
In an embodiment, the one or more influencing parameters comprise number of items on one or more promotions, number of campaigns, one or more seasonal offering days, weather forecast, number of transactions performed at one or more tills operated in the entity.
In an embodiment, the step of applying one or more models on the arrival rate and the service time rate comprises: predicting a performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity; and controlling a service time buffer associated with one or more transactions to obtain an enhanced performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity.
In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause a method for optimizing the number of tills to be operated in an entity. The method comprises obtaining an input comprising an arrival rate pattern of a plurality of users, wherein the arrival rate pattern is obtained for a plurality of instances, each instance is of a specific time interval corresponding to a time duration, and wherein the arrival rate pattern of the plurality of users corresponds to one or more departments in an entity; applying a machine learning model on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models, wherein each of the plurality of arrival rate models corresponds to a sub-duration of the time duration; predicting arrival rate for the time duration specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models from the plurality of arrival rate models by applying the plurality of arrival rate models on historical transactional data obtained from the one or more tills operating at the one or more departments in the entity; obtaining historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users; applying a combinatorial model on the historical data to obtain a predicted service time rate for each of the plurality of users; and applying one or more models on the arrival rate and the service time rate controlling a waiting time in a queue to obtain an estimation of a number of tills to be operated at the one or more departments in the entity for a given time duration.
In an embodiment, the service time rate is based on a service time buffer, and wherein the service time buffer is indicative of a time difference between at least two transactions.
In an embodiment, the waiting time in the queue is less than or equal to a predefined waiting time period.
In an embodiment, the combinatorial model comprises a machine learning model and a probabilistic model.
In an embodiment, the one or more models comprise at least one of a queuing model and a simulation model.
In an embodiment, the queuing model comprises one of a multi-server-single queue model or a multi-server-multi queue model.
In an embodiment, the method further comprises predicting, for an intraday, number of tills to be operated at the one or more departments in the entity based on (i) an intraday arrival rate of a plurality of users, and (ii) an intraday service time rate along with an intraday service time buffer, wherein the number of tills to be operated at the one or more departments in the entity for the intraday is predicted using the one or more models.
In an embodiment, the arrival rate and the service time rate are predicted based on one or more influencing parameters.
In an embodiment, the one or more influencing parameters comprise number of items on one or more promotions, number of campaigns, one or more seasonal offering days, weather forecast, number of transactions performed at one or more tills operated in the entity.
In an embodiment, the step of applying one or more models on the arrival rate and the service time rate comprises: predicting a performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity; and controlling a service time buffer associated with one or more transactions to obtain an enhanced performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 depicts a system for predicting number of tills to be operated at one or more departments in an entity for a given time duration and optimizing thereof, in accordance with an embodiment of the present disclosure.
FIG. 2 depicts an exemplary flow chart illustrating a method for predicting number of tills to be operated at one or more departments in an entity for a given time duration and optimizing thereof, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 3 depicts a graphical representation illustrating a time-series pattern of an arrival rate of a plurality of users at one department (e.g., department 1 - clothing department) in the entity, in accordance with an embodiment of the present disclosure.
FIG. 4 depicts a graphical representation illustrating a comparison between actual arrival rate and predicted arrival rate of the plurality of users, in accordance with an embodiment of the present disclosure.
FIG. 5 depicts a graphical representation illustrating a distribution of service time obtained by applying the probabilistic model on the historical data to obtain expected service time/30 minutes, in accordance with an embodiment of the present disclosure.
FIG. 6 depicts a graphical representation of service time being predicted by applying the first combinatorial model by the system, in accordance with an embodiment of the present disclosure.
FIG. 7 depicts a graphical representation illustrating a comparison of tills deployed in various departments versus tills predicted by the system versus best value of tills based on actual arrival rate and service time rate, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 depicts a system 100 for predicting number of tills to be operated at one or more departments in an entity for a given time duration and optimizing thereof, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises one or more sets of time series data captured by one or more sensors attached to various equipment (or devices) deployed and being operated in the entity, or computing systems, or any other location. The database 108 further stores information on various departments in the entity. For instance, given an entity such as a retail store, various departments may comprise, but are not limited to, groceries, clothes/merchandise, consumer electronics, furniture, and the like.
The information stored in the database 108 further comprises various techniques such as machine learning (ML) models (e.g., univariate ML model, multivariate ML model, regression model such as Random Forest, and the like), probabilistic model(s), queuing model(s), and simulation model/technique(s) as known in the art. The above-mentioned techniques comprised in the memory 102/database 108 are invoked as per requirement by the system 100 to perform the methodologies described herein. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
FIG. 2, with reference to FIG. 1, depicts an exemplary flow chart illustrating a method for predicting the number of tills to be operated at one or more departments in an entity for a given time duration and optimizing thereof, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 of FIG. 1, the flow diagram as depicted in FIG. 2, and the graphical representations of FIGS. 3 through 7. In an embodiment, at step 202 of the present disclosure, the one or more hardware processors 104 obtain an input comprising an arrival rate pattern of a plurality of users. The arrival rate pattern is obtained for a plurality of instances, each instance being of a specific time interval corresponding to a time duration (e.g., a first time duration). For example, each instance is of say 30 minutes and the time duration is say 1 week (e.g., Monday to Sunday). Further, the arrival rate pattern of the plurality of users corresponds to one or more departments (e.g., say clothing department, furniture, consumer electronics, groceries, and the like) in an entity (e.g., a retail outlet/store, a supermarket, and the like). Below Table 1 depicts the input comprising details on (i) the arrival rate pattern of the plurality of users (every 30 minutes), (ii) the department the users visited, the number of users visiting, the service time for each user, and the number of tills operated at department 1, by way of example:
Table 1
Date Department Arrival rate of Users (historic) Service time Number of tills
2021-01-12 08:00:00 Department1 3 20.00 2
2021-01-12 08:30:00 Department1 6 46.40 3
2021-01-12 09:00:00 Department1 33 42.26 6
2021-01-12 09:30:00 Department1 55 46.84 7
2021-01-12 10:00:00 Department1 100 63.34 9
2021-01-12 10:30:00 Department1 130 64.21 12
2021-01-12 11:00:00 Department1 135 63.23 12
2021-01-12 11:30:00 Department1 135 59.69 11
2021-01-12 12:00:00 Department1 160 64.27 12
FIG. 3, with reference to FIGS. 1-2, is a graphical representation depicting the arrival rate of users in an entity, in accordance with an embodiment of the present disclosure. More specifically, FIG. 3, with reference to FIGS. 1-2, depicts a graphical representation illustrating a time-series pattern of an arrival rate of a plurality of users at one department (e.g., department 1 – merchandise and household department) in the entity, in accordance with an embodiment of the present disclosure. It is to be understood by a person having ordinary skill in the art that examples are provided for each step pertaining to department 1 (e.g., merchandise and household department) and such examples shall not be construed as limiting the scope of the present disclosure. In other words, for arrival rate model(s) and service time model(s) generation, the system 100 can receive (or be configured to receive) inputs such as historical transaction data and arrival rate of users corresponding to other departments (e.g., food, spares, self-checkout tills/self-checkout terminals/self-checkout departments, etc.). As mentioned above, the step 202 comprises obtaining input such as: (i) time basis of data at a 30-minute interval based on aggregation of the number of transactions recorded or total items for the distinct time count, as the number of customers served in that period, (ii) data for consecutive 30-minute intervals (e.g., 25 intervals (08:00 to 20:30) from Monday to Saturday and 13 intervals for Sunday alone, as the store operating hours on Sunday are approximately from 11:00 to 17:30 (store timing is configurable)), and (iii) dates with 30-minute interval historical customer arrival rate, wherein it is computed as the count of total items sold in that particular time frame (30 minutes).
In an embodiment, at step 204 of the present disclosure, the one or more hardware processors 104 apply a machine learning model on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models, wherein each of the plurality of arrival rate models corresponds to a sub-duration of the time duration. For instance, the expression 'duration' as mentioned above may refer to 1 week and the expression 'sub-duration' refers to 1 day of that week. Say, if the duration is of 1 week (e.g., 12-Jan-2021 till 18-Jan-2021), then the sub-durations could be 7 days (e.g., 12-Jan-2021, 13-Jan-2021, and so on). In the present disclosure, a prediction model from algorithmic containers that comprise univariate time series, multivariate time series, regression model(s), or combinations thereof is applied on the arrival rate of the plurality of users to obtain a plurality of arrival rate models (e.g., 7 models). In other words, any one of the univariate time series model, the multivariate time series model, or a regression model such as Random Forest is applied on the arrival rate of the plurality of users to obtain a plurality of arrival rate models, in one embodiment of the present disclosure. Selection of any of the above time series models is based on training of these models and the errors (e.g., errors such as mean absolute error (MAE), R2 score, acceptable waiting time in queue, labor utilization, and predicted versus actual labor hours) generated during or after completion of training of each of these models using the input (e.g., the input as fed to the system at step 202). In other words, during exploration of the input, the input is inspected for training of the above-mentioned models, and based on the error generated on the data in the input, an appropriate model amongst the above models is selected, which is then applied on the arrival rate pattern of the plurality of users to obtain a plurality of arrival rate models. Each of the plurality of arrival rate models corresponds to a sub-duration (1 day) of the time duration (1 week), in one example embodiment. Below Table 2 depicts the plurality of arrival rate models obtained for each sub-duration, by way of examples:
Table 2
Date Department Arrival rate of Users (Historic) Arrival rate of Users (Predicted) Week_day Model
2021-01-12 11:00 Department1 135 159.9682507682 1 Model 1
2021-01-13 11:00 Department1 130 146.7323836361 2 Model 2
2021-01-14 11:00 Department1 150 160.9910164569 3 Model 3
2021-01-15 11:00 Department1 120 107.1054582858 4 Model 4
2021-01-16 11:00 Department1 115 97.511694951 5 Model 5
2021-01-17 11:00 Department1 100 79.5107265458 6 Model 6
2021-01-18 11:00 Department1 150 147.3742105373 0 Model 0
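By way of a non-limiting illustration of step 204, the following is a minimal sketch, in Python, of fitting one arrival rate model per sub-duration (weekday) and retaining the mean absolute error (MAE) used for model selection; the column names, the lag features, and the use of scikit-learn's RandomForestRegressor are assumptions made purely for illustration and do not limit the models described above.
# Minimal sketch (assumed column names/features): one arrival rate model per weekday,
# each evaluated with the mean absolute error (MAE) on a held-out chronological split.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def fit_arrival_rate_models(df: pd.DataFrame):
    """df columns (assumed): 'date' (datetime), 'arrival_rate'; 30-minute interval records."""
    df = df.copy()
    df['weekday'] = df['date'].dt.weekday
    df['hour'] = df['date'].dt.hour
    df['minute'] = df['date'].dt.minute
    df['lag_1'] = df['arrival_rate'].shift(1)           # previous 30-minute interval
    df['lag_week'] = df['arrival_rate'].shift(7 * 25)   # same interval, previous week (approx.)
    df = df.dropna()

    models = {}
    for weekday, grp in df.groupby('weekday'):           # one model per sub-duration (day)
        X = grp[['hour', 'minute', 'lag_1', 'lag_week']]
        y = grp['arrival_rate']
        split = int(0.8 * len(grp))                      # simple chronological train/test split
        model = RandomForestRegressor(n_estimators=200, random_state=42)
        model.fit(X.iloc[:split], y.iloc[:split])
        mae = mean_absolute_error(y.iloc[split:], model.predict(X.iloc[split:]))
        models[weekday] = (model, mae)                   # retain per-day model and its MAE
    return models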
In an embodiment, at step 206 of the present disclosure, the one or more hardware processors 104 predict arrival rate for the time duration (e.g., 1 week) specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models from the plurality of arrival rate models by applying the one or more arrival rate models on historical transactional data obtained from the one or more tills operating at the one or more departments in the entity. In an embodiment of the present disclosure, one or more arrival rate model(s) may be automatically identified by the system 100 from the plurality of arrival rate models for predicting the arrival rate for the time duration. Below Table 3 depicts the historical transactional data obtained from the one or more tills operating at the one or more departments in the entity, by way of examples:
Table 3
Date 2021-01-12 08:30:00 2021-01-12 09:00:00 2021-01-12 09:30:00 2021-01-12 10:00:00 2021-01-12 10:30:00 2021-01-12 11:00:00 2021-01-12 11:30:00 2021-01-12 12:00:00 2021-01-12 12:30:00
Department Department 1 Department 1 Department 1 Department 1 Department 1 Department 1 Department 1 Department 1 Department 1
Temperature 23 23 23 24 24 27 27 30 30
Precipitation 0 0 0 0 0 0 0 0 0
Wind_speed 2.2 4.7 4.7 3.4 3.4 3.4 3.4 3.4 3.4
Cloud_cover 0 0 0 0 0 0 0 0 0
Humidity 78.21 78.21 78.21 69.16 69.16 57.87 57.87 45.64 45.64
Condition Clear Clear Clear Clear Clear Clear Clear Clear Clear
Promo_value 0 0 0 0 0 0 0 0 0
National 0 0 0 0 0 0 0 0 0
Regular_day 1 1 1 1 1 1 1 1 1
Location Location 1 Location 1 Location 1 Location 1 Location 1 Location 1 Location 1 Location 1 Location 1
By applying the one or more arrival rate models on the above historical transactional data of Table 3, the arrival rate for the time duration (e.g., 1 week) specific to the one or more tills operating at the one or more departments in the entity is predicted. The predicted arrival rate is depicted in Table 4 below by way of examples:
Table 4
Date Department Arrival rate of Users (historic) Arrival rate of Users (predicted) Week_day Model
2021-01-12 11:00:00 Department 1 135 159.9682507682 1 Model 1
2021-01-13 11:00:00 Department 1 130 146.7323836361 2 Model 2
2021-01-14 11:00:00 Department 1 150 160.9910164569 3 Model 3
2021-01-15 11:00:00 Department 1 120 107.1054582858 4 Model 4
2021-01-16 11:00:00 Department 1 115 97.511694951 5 Model 5
2021-01-17 11:00:00 Department 1 100 79.5107265458 6 Model 6
2021-01-18 11:00:00 Department 1 150 147.3742105373 0 Model 0
With reference to the above Table 4 and FIGS. 1 through 3, FIG. 4 depicts a graphical representation illustrating the arrival rate prediction for the time duration (e.g., 1 week) specific to one or more tills operating at the one or more departments in the entity using one or more arrival rate models, in accordance with an embodiment of the present disclosure. More specifically, FIG. 4 depicts a graphical representation illustrating a comparison between the actual arrival rate and the predicted arrival rate of the plurality of users, in accordance with an embodiment of the present disclosure.
In an embodiment, at step 208 of the present disclosure, the one or more hardware processors 104 obtain historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users, the historical transactions being comprised in the historical data. Below Table 5 depicts the historical data comprising (i) historical service time and (ii) basket size pertaining to historical transactions of each of the plurality of users, by way of examples:
Table 5
Date Department Arrival rate of Users (historic) Service time Basket size Number of tills
2021-01-12 08:00:00 Department 1 3 20.00 3.7 2
2021-01-12 08:30:00 Department 1 6 46.40 3.8 3
2021-01-12 09:00:00 Department 1 33 42.26 4.8 6
2021-01-12 09:30:00 Department 1 55 46.84 4.7 7
2021-01-12 10:00:00 Department 1 100 63.34 4.8 9
2021-01-12 10:30:00 Department 1 130 64.21 5.8 12
2021-01-12 11:00:00 Department 1 135 63.23 5.7 12
2021-01-12 11:30:00 Department 1 135 59.69 6.4 11
2021-01-12 12:00:00 Department 1 160 64.27 6.7 12
In an embodiment, at step 210 of the present disclosure, the one or more hardware processors 104 apply a combinatorial model on the historical data (e.g., refer to Table 5) to obtain a predicted service time rate for each of the plurality of users. More specifically, for the service time, the data collection comprises transaction data (e.g., point of sale transaction data or transaction data associated with each till), wherein the service time for each transaction is calculated by taking the difference between the transaction end time and the transaction start time. Further, the average service time for each 30-minute interval is calculated, wherein average service time = total service time in that 30-minute interval / total number of transactions (customers) in that 30-minute interval. Based on the store/retail outlet working hours, for each date and interval, one record is collected in the dataset. Example: Sundays: 13 intervals (11:00 to 17:30), weekdays: 25 intervals (08:00 to 20:30). Further, outliers related to the service time are removed by using the interquartile range (IQR) approach. All dates along with 30-minute intervals are generated for the whole dataset, and missing data (service time) is imputed with the mean service time values calculated from the historical data according to the day and time period. Hour, minute, month, and day-of-month features are generated from "Date" for each record, and dummy variables are created for "Day" and "Department".
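A minimal sketch of the above service time data preparation is given below, assuming a point-of-sale transaction table with 'txn_start', 'txn_end', and 'department' columns; the column names are illustrative only and do not limit the data collection described above.
# Illustrative sketch of the service time preparation described above (assumed column names).
import pandas as pd

def prepare_service_time(txns: pd.DataFrame) -> pd.DataFrame:
    txns = txns.copy()
    # Service time per transaction = transaction end time - transaction start time (seconds).
    txns['service_time'] = (txns['txn_end'] - txns['txn_start']).dt.total_seconds()
    # Average service time per 30-minute interval = total service time / number of transactions.
    txns['interval'] = txns['txn_start'].dt.floor('30min')
    agg = (txns.groupby(['department', 'interval'])['service_time']
                .mean()
                .reset_index(name='avg_service_time'))
    # Remove outliers using the interquartile range (IQR) approach.
    q1, q3 = agg['avg_service_time'].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = agg['avg_service_time'].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    return agg[mask]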
Referring to step 210, the combinatorial model comprises a machine learning model and a probabilistic model, in one embodiment of the present disclosure. The expression 'probabilistic model' is also referred to as 'mathematical model' and interchangeably used herein. The mathematical model gives the ability to capture the historical service times in the form of a distribution, which is then fed into a machine learning based regression model (e.g., Random Forest) for prediction of the service time rate. This results in improving the prediction performance and accuracy. Therefore, the system 100 first applies the probabilistic model, and then the machine learning model is applied to predict the service time rate or obtain the predicted service time rate. In an embodiment, the expressions 'service time rate' and 'predicted service time rate' may be interchangeably used herein. In other words, at first, the probability distribution of the service time is fitted using the probabilistic model/mathematical model as known in the art, e.g., for the time 9:00 to 9:30 (finding the distribution of the service time). FIG. 5, with reference to FIGS. 1 through 4, depicts a graphical representation illustrating a distribution of service time obtained by applying the probabilistic model on the historical data to obtain the expected service time per 30 minutes, in accordance with an embodiment of the present disclosure. After obtaining the expected service time per 30 minutes, a predictive model such as a machine learning (ML) model is applied. More specifically, an ML predictive model (e.g., Random Forest) is used to forecast the service time. Upon applying the combinatorial model on the historical data, the service time rate for each of the plurality of users is predicted as depicted in FIG. 6. More specifically, FIG. 6, with reference to FIGS. 1 through 5, depicts a graphical representation of the service time being predicted by applying the combinatorial model by the system 100, in accordance with an embodiment of the present disclosure. More specifically, FIG. 6 depicts a graphical representation illustrating a comparison of the service time being predicted versus the historic service time. Table 6 depicts the predicted service time, by way of examples:
Table 6
Date Department Arrival rate of Users (historic) Service time (predicted) Number of tills Weekday
2021-01-12 08:00:00 Department 1 3 20.00 2 1
2021-01-12 08:30:00 Department 1 6 46.40 3 1
2021-01-12 09:00:00 Department 1 33 42.26 6 1
2021-01-12 09:30:00 Department 1 55 46.84 7 1
2021-01-12 10:00:00 Department 1 100 63.34 9 1
2021-01-12 10:30:00 Department 1 130 64.21 12 1
2021-01-12 11:00:00 Department 1 135 63.23 12 1
2021-01-12 11:30:00 Department 1 135 59.69 11 1
2021-01-12 12:00:00 Department 1 160 64.27 12 1
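By way of a non-limiting illustration of the combinatorial model of step 210 and the predictions shown in Table 6 above, the sketch below first fits a probability distribution to the historical service times of an interval (a log-normal distribution is assumed purely for illustration) and then feeds the expected service time per 30-minute interval, together with other features, into a Random Forest regressor; the function and column names are assumptions and not part of the disclosure.
# Illustrative sketch of the combinatorial (probabilistic + machine learning) service time model.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor

def expected_service_time(samples: np.ndarray) -> float:
    """Fit a distribution (log-normal assumed for illustration) to the historical service times
    of one 30-minute interval and return the expected service time for that interval."""
    shape, loc, scale = stats.lognorm.fit(samples, floc=0)
    return stats.lognorm.mean(shape, loc=loc, scale=scale)

def predict_service_time_rate(X_hist, y_expected, X_future):
    """X_hist: features per interval (e.g., hour, weekday, basket size);
    y_expected: expected service time per interval from the fitted distribution;
    X_future: features for the intervals to be predicted."""
    model = RandomForestRegressor(n_estimators=300, random_state=42)
    model.fit(X_hist, y_expected)
    return model.predict(X_future)   # predicted service time per user per interval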
At step 212 of the present disclosure, the one or more hardware processors 104 apply one or more models on the arrival rate and the service time rate controlling a waiting time in a queue to obtain an estimation of a number of tills to be operated at the one or more departments in the entity for a given time duration (e.g., a second time duration, say any number of day(s) or weeks) with reference to the time duration (e.g., the first time duration) mentioned in step 202. For instance, the given time duration may refer to (i) the current day (e.g., say 20-Jan-2021) from the time duration (12-Jan-2021 till 18-Jan-2021) or (ii) any number of days (e.g., 20-Jan-2021, 21-Jan-2021, and the like) or weeks (e.g., the week of 20-Jan-2021 till 31-Jan-2021) from the time duration (12-Jan-2021 till 18-Jan-2021). In an embodiment, the arrival rate and the service time rate are predicted based on one or more influencing parameters, wherein the one or more influencing parameters comprise the number of items on one or more promotions, the number of campaigns, one or more seasonal offering days, weather forecast, and the number of transactions performed at one or more tills operated in the entity. A few examples of the influencing parameters are depicted in Table 3. It is to be understood by a person having ordinary skill in the art or a person skilled in the art that the examples of the influencing parameters as specified and described herein shall not be construed as limiting the scope of the present disclosure. The one or more models comprise at least one of a queuing model and a simulation model, in one embodiment of the present disclosure. In other words, the one or more models comprise one of a queuing model or a simulation model. The queuing model is one of a multi-server-single queue model or a multi-server-multi queue model. Below Table 7 depicts the number of tills estimated/predicted by the system 100 to be operated at a given time duration, by way of examples. More specifically, Table 7 depicts the number of tills predicted by both the queuing model and the simulation technique.
Table 7
Date Arrival rate of Users (predicted) Department Service time (predicted) Service time rate (predicted) Simulation technique_tills estimated/predicted for operating in the entity Queuing model_tills estimated/predicted for operating in the entity
2021-12-11 11:00:00 98.1201248169 Department1 66.41 18.0695678362 7 7
2021-12-11 11:30:00 94.6656112671 Department 1 66.5 18.045112782 7 7
2021-12-11 12:00:00 95.1092834473 Department 1 66.87 17.9452669358 7 7
2021-12-11 12:30:00 94.801322937 Department 1 66.56 18.0288461538 7 7
2021-12-11 13:00:00 94.8413619995 Department 1 65.66 18.2759671033 7 6
2021-12-11 13:30:00 91.2748794556 Department 1 65.5 18.320610687 7 6
2021-12-11 14:00:00 93.933380127 Department 1 64.33 18.6538162599 7 6
2021-12-11 14:30:00 94.7294845581 Department 1 64.31 18.6596174778 6 6
2021-12-11 15:00:00 94.2357406616 Department 1 64.05 18.7353629977 6 6
Conventionally, there could be approaches that may have used the queuing model and the simulation model in isolation or independently. However, applying the one or more models on the arrival rate and the service time while controlling the waiting time in a queue enables faster prediction or estimation of the number of tills to be operated at a given time duration. Moreover, the queuing model is faster compared to the simulation method in recommending the number of tills. However, when the incoming data (e.g., arrival rate, service time, and the like) to the system 100 is not uniform in the time period, its accuracy decreases, which is when the simulation-based approach (e.g., the simulation technique) is selected by the system 100 for making till recommendations/predictions. In case the arrival rate pattern (which is in the form of time series data) and the service time rate have uniform time series, in such scenarios, only the queuing model may be applied on the predicted arrival rate and the predicted service time rate to predict an estimation of the number of tills to be operated in the one or more departments of the entity. Alternatively, when the time series of the arrival rate pattern and the service time pattern is determined as non-uniform in nature and the queuing model is applied, there may be a variation in the performance with respect to the estimated number of tills to be operated. In such scenarios, the output determined by the queuing model may be discarded, and the simulation model/technique is directly applied on the predicted arrival rate and the predicted service time rate, and the number of tills to be operated in the one or more departments of the entity is predicted. It is to be understood by a person having ordinary skill in the art or a person skilled in the art that the departments vary across entities and examples of such departments shall not be construed as limiting the scope of the present disclosure.
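The exact queuing equations are not reproduced herein; the following is a minimal sketch assuming a standard M/M/c (Erlang-C) formulation, which is one known form of a multi-server single-queue model, of estimating the smallest number of tills for which the average waiting time in the queue does not exceed a predefined waiting time period (e.g., 2 minutes). The numeric values in the example are illustrative only.
# Illustrative M/M/c (multi-server single queue) sketch: smallest number of tills such that
# the average waiting time in queue Wq is within the predefined waiting time period.
from math import factorial

def erlang_c_wq(lam: float, mu: float, c: int) -> float:
    """lam: arrival rate (customers/hour), mu: service rate per till (customers/hour),
    c: number of tills; returns the average waiting time in queue Wq (hours)."""
    rho = lam / (c * mu)
    if rho >= 1.0:
        return float('inf')                       # queue is unstable for this number of tills
    a = lam / mu                                  # offered load
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    lq = (a**c * rho) / (factorial(c) * (1 - rho) ** 2) * p0   # average queue length Lq
    return lq / lam                               # Little's law: Wq = Lq / lambda

def estimate_tills(lam: float, mu: float, max_wait_hours: float, max_tills: int = 50) -> int:
    for c in range(1, max_tills + 1):
        if erlang_c_wq(lam, mu, c) <= max_wait_hours:
            return c
    return max_tills

# Example (illustrative values only): predicted arrival rate of 160 customers/hour, service rate
# of 18 customers/hour per till, predefined waiting time period of 2 minutes (2/60 hours).
tills = estimate_tills(lam=160, mu=18, max_wait_hours=2 / 60)
Such an analytical estimate can then be cross-checked against, or replaced by, the simulation technique described below when the incoming data is non-uniform.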
The simulation technique as applied by the system 100 of the present disclosure can be better understood by the following description: Say, at first, a virtual environment of a queuing system for multi-server multi-queue is created. Then the behaviour of the number of customers per 30 minutes and the service time per 30 minutes is observed/estimated. Further, the distribution of the arrival and service time is historically determined. Then the customer arrival and the corresponding service time are generated from the observed distribution. Then the following key parameters are measured:
clock = 0.0                                              # simulation clock
n_tills = N_TILLS                                        # number of till servers
num_arrivals = 0                                         # total number of arrivals
t_arrival = gen_int_arr()                                # time of next arrival (drawn from the arrival distribution)
t_departure = [[float('inf')] for _ in range(n_tills)]   # departure times per till
dep_sum = [0 for _ in range(n_tills)]                    # sum of service times per till
till_state = [0 for _ in range(n_tills)]                 # current state of each till (0 = idle, 1 = busy)
total_wait_time = [0 for _ in range(n_tills)]            # total wait time per till
Upon measuring the above key parameters, on the basis of the simulation time duration and with the help of the state of the simulation at the last customer, N (number of arrivals), Lq (average queue length), Wq (average waiting time in queue), and Ut (average queue utilization) are calculated. The above calculations are better understood by the following example:
Time duration = 60 min
Number of tills = 3
Arrival rate = 100 customers/hour
Service rate = 50 customers/hour
State of the simulation at the last iteration of the given time duration (60-minute period):
clock = 59.93,
num_arrivals = 106
t_arrival = 60.50,
t_departure = [[float('inf')], [float('inf'), 60.31], [float('inf')]],
dep_sum = [31.73, 32.70, 27.20],
till_state = [0, 0, 0],
total_wait_time = [4.92, 11.56, 6.55],
num_of_departures = [40, 34, 31],
num_in_q = [0, 0, 0],
number_in_queue = 26
1) Number of Arrivals = num_arrivals
total waiting time_q = sum of waiting time from all the tills = 4.92 + 11.56 + 6.55 = 23.03
2) Avg. waiting time in queue = total_wait_time / number_in_queue = 23.03/26 = 0.8857 min
Utilization by till = dep_sum / time duration
Utilization by till = [0.5289, 0.5450, 0.4534]
3) Avg. utilization = mean of utilization by till = 0.5091
4) Avg. queue length is calculated by taking the average of the observations having a queue length.
The above process may be repeated 'n' times (e.g., say 100 times), in one embodiment of the present disclosure. The outputs may be observed, and the final output is an average of the simulated outputs across the iterations. For instance, N = 101.07,
Lq = 1.2818, Wq = 1.5091, and Ut = 0.6758.
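The above computation can, for instance, be expressed as the following short sketch over the simulation state listed in the worked example (values reproduced from that example; no new data is introduced).
# Summary metrics from the simulation state of the worked example above.
total_wait_time = [4.92, 11.56, 6.55]       # per-till waiting times (minutes)
dep_sum = [31.73, 32.70, 27.20]             # per-till sums of service times (minutes)
number_in_queue = 26
time_duration = 60.0                        # minutes

total_wait = sum(total_wait_time)                            # 23.03
avg_wait_in_queue = total_wait / number_in_queue             # ~0.8857 min
utilization_by_till = [d / time_duration for d in dep_sum]   # ~[0.5289, 0.5450, 0.4534]
avg_utilization = sum(utilization_by_till) / len(utilization_by_till)   # ~0.5091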
In an embodiment, the service time rate is based on a service time buffer, wherein the service time buffer is indicative of a time difference between at least two transactions. Such time difference, for instance, may include casual conversation(s) between the customer(s) and the user operating a corresponding till in a department. The expression 'service time buffer' may also be referred to as 'auto buffer percentage' and interchangeably used herein. The service time buffer detection is performed as below:
An accuracy booster model is used for auto buffer percentage detection.
Criterion: Acceptance percentage (AP) cutoff >= 95%
Iteration 1:
Initial buffer at department level = {Department 1=0%, Department 2=0%, Department 3= 0%, Department 4= 0 %}
AP = {Department 1= 94.73 %, Department 2 = 100 %, Department 3 = 100 %, Department 4= 100%}
Non-compliance department: Department 1
Iteration 2:
buffer = {Department 1= 5 %, Department 2=0 %, Department 3= 0 %, Department 4= 0 %}
AP = {Department 1= 100 %, Department 2 = 100 %, Department 3 = 100 %, Department 4= 100%}
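A minimal sketch of the accuracy booster loop described above is given below; the evaluation helper `acceptance_percentage` and the 5% buffer step are assumptions introduced purely for illustration.
# Illustrative accuracy booster loop: raise the service time buffer per department until the
# acceptance percentage (AP) meets the cutoff (e.g., >= 95%).
AP_CUTOFF = 95.0
BUFFER_STEP = 5.0   # percent; assumed step size for illustration

def tune_buffers(departments, acceptance_percentage, max_buffer=50.0):
    """acceptance_percentage(dept, buffer) -> AP after re-estimating tills with the buffer
    applied to the predicted service time (hypothetical helper, not defined in the disclosure)."""
    buffers = {dept: 0.0 for dept in departments}
    for dept in departments:
        while (acceptance_percentage(dept, buffers[dept]) < AP_CUTOFF
               and buffers[dept] < max_buffer):
            buffers[dept] += BUFFER_STEP   # e.g., Department 1: 0% -> 5% as in Iteration 2 above
    return buffers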
Further, the waiting time in the queue is less than or equal to a predefined waiting time period, which is used for prediction or estimation of the number of tills to be operated. For instance, the predefined waiting time period is 2 minutes (or 120 seconds). The predefined waiting time period is configurable according to the requirements and may be subject to dynamic change by the system 100, in one example embodiment. Below Table 8 depicts examples of the waiting time based on the predicted arrival rate of the plurality of users and the predicted service time for each user at respective tills being operated at various departments in the entity.
Table 8
Date Arrival rate of Users (predicted) Service time (predicted) Tills Waiting time in a queue (W_q)
2021-01-12 08:30:00 3.31 60.54 1 0.13
2021-01-12 09:00:00 10.90 63.89 1 0.67
2021-01-12 09:30:00 38.55 66.56 3 1.00
2021-01-12 10:00:00 75.61 66.44 5 1.40
In an embodiment, the hardware processors 104 further predict, for an intraday, the number of tills to be operated at the one or more departments in the entity based on (i) an intraday arrival rate of a plurality of users, and (ii) an intraday service time rate along with an intraday service time buffer, wherein the number of tills to be operated at the one or more departments in the entity for the intraday is predicted using the one or more models. For the intraday model, the input is the deviation of the actual versus predicted values over the last 50 recent intervals, based on the model output and the historic data. Out of the 50 lags, dynamic selection of influential lags is based on XGBoost feature importance. For arrival rate estimation/prediction, a SARIMAX model, as known in the art, has been used. Based on the hyperparameters, the model selects recent data only, and feature selection is based on the Random Forest feature selection method.
For the SARIMA model, a grid search based on the AIC (Akaike Information Criterion) was used to find the optimal values of the SARIMA parameters at which the AIC is minimum:
p, d, q = range(0, 2), 1, range(0, 3)
Ps, D, Qs = range(0, 2), 1, range(0, 3)
s = seasonality
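A minimal sketch of the above AIC-based grid search, assuming the statsmodels SARIMAX implementation, is shown below; it merely iterates over the parameter ranges listed above and keeps the combination with the minimum AIC.
# Illustrative AIC-based grid search over the SARIMA parameter ranges defined above.
import itertools
from statsmodels.tsa.statespace.sarimax import SARIMAX

def sarima_grid_search(y, s):
    """y: arrival rate series; s: seasonality (number of intervals per season)."""
    best = (None, float('inf'))
    for p, q, P, Q in itertools.product(range(0, 2), range(0, 3), range(0, 2), range(0, 3)):
        try:
            model = SARIMAX(y, order=(p, 1, q), seasonal_order=(P, 1, Q, s))
            result = model.fit(disp=False)
            if result.aic < best[1]:
                best = ((p, 1, q, P, 1, Q, s), result.aic)   # keep parameters with minimum AIC
        except Exception:
            continue   # skip non-convergent combinations
    return best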
For the Random Forest regressor, the system 100 used random grid search optimization based on the MAE to find the optimal values of the hyperparameters at which the MAE is minimum:
import numpy as np
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start=100, stop=1000, num=10)]
# Number of features to consider at every split
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(5, 25, num=10)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 3, 5]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
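A minimal sketch of the random grid search optimization over the above hyperparameter grid, using scikit-learn's RandomizedSearchCV with (negative) mean absolute error as the scoring criterion, is shown below; the number of sampled combinations and the cross-validation folds are assumptions, and the training data placeholders are hypothetical.
# Illustrative random grid search over the Random Forest hyperparameter grid defined above,
# scored on negative mean absolute error so that the MAE is minimized.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

param_grid = {
    'n_estimators': n_estimators,
    'max_depth': max_depth,
    'min_samples_split': min_samples_split,
    'min_samples_leaf': min_samples_leaf,
    'bootstrap': bootstrap,
}
search = RandomizedSearchCV(
    estimator=RandomForestRegressor(random_state=42),
    param_distributions=param_grid,
    n_iter=50,                              # assumed number of sampled combinations
    scoring='neg_mean_absolute_error',
    cv=3,                                   # assumed number of folds
    random_state=42,
)
# search.fit(X_train, y_train)              # X_train/y_train: service time features and targets (placeholders)
# best_model = search.best_estimator_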
Below are tables that depict tills prediction for both the long-term duration and intraday. More specifically, the tables depict a comparison of historic and predicted tills for the long-term duration and intraday. For instance, Table 9 depicts actuals versus long-term predicted/estimated tills to be operated in the entity.
Table 9
Date Actual tills Optimized tills (historic) Number of tills predicted/estimated by the present disclosure using combinatorial models
01/27/2021 135 82 84
Relative Till percentage % 100% 60.74% 69.62%
Table 10 depicts analysis of 1 day output with reference to Table 9.
Table 10
Findings Value
Number of observations 25
Predicted tills = optimum tills 10
Predicted tills > optimum tills 13
Predicted tills < optimum tills 2
Acceptance percentage 92%
Table 11 depicts actuals versus long term predicted/estimated tills to be operated in the entity.
Table 11
Date Actual tills Optimized tills (historic) Number of tills predicted/estimated by the present disclosure using combinatorial models
01/27/2021 135 82 88
Relative Till percentage % 100% 60.74% 65.15%
Table 12 depicts analysis of 1 day output with reference to Table 11.
Table 12
Findings Value
Number of observations 25
Predicted tills = optimum tills 17
Predicted tills > optimum tills 7
Predicted tills < optimum tills 1
Acceptance percentage 96%
When the estimation or prediction of the number of tills is being made by the system 100, the system 100 also predicts a performance accuracy for each till from the predicted number of tills to be operated at the one or more departments in the entity. The performance accuracy is predicted based on the comparison of the estimated number of tills versus the tills historically operated. Such comparison results in obtaining the best value of tills based on the actual arrival rate and the actual service time rate. The best value of tills provides information on how each till tends to operate when deployed/opened in the one or more departments in the entity, even before these are operated or opened. FIG. 7, with reference to FIGS. 1 through 6, depicts a graphical representation illustrating a comparison of tills deployed in various departments versus tills predicted by the system 100 versus the best value of tills based on the actual arrival rate and service time rate, in accordance with an embodiment of the present disclosure. More specifically, FIG. 7 depicts how the predicted tills per 30 minutes are close to the optimum tills, and the reduction in deployed tills. The performance accuracy is further used to control a service time buffer associated with one or more transactions to obtain an enhanced performance accuracy for each till from the number of tills to be operated at the one or more departments in the entity. In other words, the service time buffer is optimized to obtain an enhanced performance accuracy or optimized performance accuracy in case the system 100 determines that the prediction of the tills estimated can be further improved. Below are the experimental/simulation results obtained by the system 100 and the method of the present disclosure.
Results:
Following are the definitions used in the context of the present disclosure and its method:
Actual Tills: The actual number of tills is the number of historically used tills at the store-department level.
Model Recommended/Predicted Tills: The predicted number of tills is generated by the queuing theory (or queuing model)/simulation technique using the predicted arrival rate and predicted service time at the store-department level.
Best Value/Optimum Tills: The optimum number of tills is generated by queuing theory/simulation using the actual arrival rate and actual service time at the store-department level.
Accepted Event: An accepted event is defined for a particular time interval if the predicted tills are greater than or equal to the optimum tills.
Acceptance Percentage: The acceptance percentage is the ratio of total accepted events to total events.
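The acceptance percentage defined above can be computed directly from per-interval predicted and optimum till counts. A minimal sketch follows; the two lists are assumed to be aligned per time interval, and the example values are illustrative only, not taken from the tables in this disclosure.

def acceptance_percentage(predicted_tills, optimum_tills):
    # Share of time intervals in which predicted tills >= optimum tills (an "accepted event")
    accepted = sum(1 for p, o in zip(predicted_tills, optimum_tills) if p >= o)
    return 100.0 * accepted / len(predicted_tills)

# Hypothetical one-day run with illustrative half-hourly till counts
predicted = [4, 5, 6, 6, 7, 5, 4, 6]
optimum   = [4, 5, 5, 6, 7, 6, 4, 6]
print(acceptance_percentage(predicted, optimum))   # 87.5, since one interval falls short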
Below is the output of the system 100 (e.g., number of tills to be operated) prior to controlling the service time/service time buffer for an intraday period.
For department 1 (e.g., a clothing department), Table 13 below depicts actual versus intraday predicted tills.
Table 13
| Date | Actual tills | Optimized tills (historic) | Number of tills predicted/estimated by the present disclosure using combinatorial models |
|---|---|---|---|
| 01/25/2021 | 174 | 77 | 90 |
| Relative Till percentage % | 100% | 44.25% | 71.72% |
Table 14 depicts analysis of 1 day output with reference to Table 13.
Table 14
| Findings | Value |
|---|---|
| Number of observations | 19 |
| Predicted tills = optimum tills | 5 |
| Predicted tills > optimum tills | 11 |
| Predicted tills < optimum tills | 3 |
| Acceptance percentage | 84.21% |
After applying the accuracy booster method, the acceptance percentage increased to 100%. Table 15 depicts actuals versus intraday predicted/estimated tills to be operated in the entity with the accuracy booster model.
Table 15
| Date | Actual tills | Optimized tills (historic) | Number of tills predicted/estimated by the present disclosure using combinatorial models |
|---|---|---|---|
| 01/25/2021 | 174 | 77 | 114 |
| Relative Till percentage % | 100% | 44.25% | 65.51% |
Table 16 depicts analysis of 1 day output with reference to Table 15.
Table 16
| Findings | Value |
|---|---|
| Number of observations | 19 |
| Predicted tills = optimum tills | 1 |
| Predicted tills > optimum tills | 18 |
| Predicted tills < optimum tills | 0 |
| Acceptance percentage | 100% |
Additionally, the systems and methods of the present disclosure are configured to predict which tills need to be opened depending upon the hour of the day, the user/customer shopping path, till proximity to the exit gates, queue length, and the like. Such till prediction can be made by the system and method of the present disclosure to suggest the specific tills to be opened to minimize the user/customer walking time in the store/retail outlet. Moreover, it is to be understood by a person having ordinary skill in the art or a person skilled in the art that the examples of machine learning models, probabilistic model(s), queueing model(s), and simulation model/technique(s) as implemented by the system 100 shall not be construed as limiting the scope of the present disclosure. In other words, the system 100 and the method of the present disclosure can implement any other variants of the machine learning models, probabilistic model(s), queueing model(s), and simulation model/technique(s), in one example embodiment.
The system and method of the present disclosure implement various models and predict the number of tills to be opened, to enable long term and intraday planning for till operations in entities (e.g., retail stores). This further enables long term and intraday optimization of labor/workforce management, wherein an entity at any given time duration can determine how many tills are to be opened that need to be assisted or operated by operators (e.g., users). Alternatively, the tills can be opened wherein the tills serve as self-checkout cash register(s). The long term till plan, which typically covers 3 weeks, is required by the enterprise labor management system for labor planning. The intraday till prediction is required considering the events on the specific day so that a store manager can plan and prioritize tasks based on labor availability. The number of tills to be opened primarily depends on the customer arrival rate patterns (or arrival pattern) and the service rate of the tills (or service time at the tills).
As is evident, the customer arrival pattern varies significantly by month, week of the month, day of the week, and hour of the day. It also varies by store department. For example, a clothing department may have fewer customers in the mornings and more in the evening, whereas a food department may have more customers in the mornings and afternoons. Similarly, other features also impact the arrival pattern, such as the number of promotions running in the store, weather, planned events, and special days (e.g., Thanksgiving Day, festivals, and the like). Intraday till labor optimization is done by improving customer transaction prediction based on current behavior/trend. The customer service rate at the POS (also referred to as the point of sale terminal or till) is dynamically predicted for till labor optimization. The service rate is the number of customers that can be served within a period. This in turn is directly proportional to the basket size. Typically, the basket size in the mornings and afternoons can be smaller than the basket size in the evenings; however, there can be exceptions on weekends and holidays. The service rate also varies based on the skill level of cashiers, as some cashiers may exhibit higher performance than others. Till labor prediction can be done with as little as 4 weeks of historical/training data. Using either of two approaches, the queuing theory methodology or the simulation approach/model/technique, the predicted customer arrival rate and service rate distributions are applied, and parameters such as the number of tills, queue depth, and wait time of the queue are predicted, as illustrated in the sketch below. The system 100 is further configured to enable altering various parameters, checking the impacts, and providing what-if analysis, for example, if x number of tills are used, then what would be the customer wait time in the queue.
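One way to realize the queuing approach mentioned above is the standard M/M/c (multi-server, single-queue) model referenced in the claims; the sketch below uses the Erlang-C formula to find the smallest number of tills for which the expected waiting time in the queue stays within a predefined limit. This is a minimal sketch under assumed Poisson arrivals and exponential service times, with hypothetical arrival rate, service rate, and wait-time limit values.

import math

def erlang_c(c, lam, mu):
    # Probability that an arriving customer must wait in an M/M/c queue
    a = lam / mu                                   # offered load
    rho = a / c                                    # utilization (must be < 1 for stability)
    numerator = (a ** c / math.factorial(c)) / (1.0 - rho)
    denominator = sum(a ** k / math.factorial(k) for k in range(c)) + numerator
    return numerator / denominator

def expected_wait(c, lam, mu):
    # Mean waiting time in the queue (Wq) for an M/M/c system
    return erlang_c(c, lam, mu) / (c * mu - lam)

def min_tills(lam, mu, max_wait):
    # Smallest number of tills keeping the expected wait at or below max_wait
    c = math.floor(lam / mu) + 1                   # smallest stable number of servers
    while expected_wait(c, lam, mu) > max_wait:
        c += 1
    return c

# Hypothetical: 120 customers/hour arriving, 40 customers/hour served per till,
# waiting time in the queue limited to 3 minutes (0.05 hours)
print(min_tills(lam=120.0, mu=40.0, max_wait=0.05))   # prints 4 under these assumptions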
Customer service time at the POS is dynamically predicted for till optimization, wherein embodiments of the present disclosure consider salient identification of the service time distribution and ideal hyperparameters to optimize the expected service time. More specifically, the optimal service time is derived by day of the week and hour of the day by identifying the service time distribution using technical components such as random grid search and the empirical distribution function, and by considering influencing factors/parameters. The right service distribution is identified using the cross-validation method and the least-MAE criterion. Further, automated fitment evaluation of the normal distribution, exponential distribution, and gamma distribution with a classical probability approach is done to find the expected service time, and a random forest regressor with automated hyperparameter tuning is used to predict the expected service time for a given time window (see the sketch after this paragraph). Furthermore, the service time distribution model fitment is measured through the maximum likelihood estimator, R2 score, and MAE for determination of the right model.
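A minimal sketch of such an automated fitment evaluation is shown below, assuming scipy is available: each candidate distribution (normal, exponential, gamma) is fitted to observed service times by maximum likelihood and the best fit is chosen by log-likelihood. The service time sample here is synthetic; it is not data from the experiments above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
service_times = rng.gamma(shape=2.0, scale=30.0, size=1000)   # synthetic service times (seconds)

candidates = {'normal': stats.norm, 'exponential': stats.expon, 'gamma': stats.gamma}
log_likelihoods = {}
for name, dist in candidates.items():
    params = dist.fit(service_times)                           # maximum-likelihood estimates
    log_likelihoods[name] = np.sum(dist.logpdf(service_times, *params))

best = max(log_likelihoods, key=log_likelihoods.get)
print(best, {k: round(v, 1) for k, v in log_likelihoods.items()})   # gamma wins on this sample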
The service time prediction is dynamic for efficient till optimization and workforce management/optimization. In order to achieve the service time prediction, input transaction data captured from POS devices and the current queue length captured from video cameras are used to determine the transaction service time by various dimensions. The service time prediction is done by day of the week and hour of the day, and considers various influencing factors such as the number of items on promotion and active events in the store. The model is flexible enough to evaluate any other additional features, when supplied. The system 100 recommends an optimal increment on the predicted service time to increase the acceptance percentage of the scenarios during model training and fitment (a sketch of one such increment search follows this paragraph). Automated validation of the predicted service time against the actual service time in the very next period through near real-time data transfer enables efficient till optimization and management.
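The "optimal increment" can be read as the smallest service time buffer that lifts the acceptance percentage to a target value. The following is a minimal sketch under that reading; it uses a simple offered-load staffing rule in place of the full queuing/simulation model, and all interval data, the 95% target, and the 5-second step are hypothetical.

import math

def tills_for(arrival_rate_per_hour, service_time_sec):
    # Simple offered-load staffing rule used only for illustration (not the disclosed model)
    return math.ceil(arrival_rate_per_hour * service_time_sec / 3600.0) + 1

def optimal_buffer(arrival_rates, predicted_service, optimum_tills,
                   target_pct=95.0, step=5.0, max_buffer=300.0):
    # Smallest service-time increment (seconds) such that the predicted tills are
    # >= optimum tills in at least target_pct of the intervals
    buffer = 0.0
    while buffer <= max_buffer:
        predicted = [tills_for(lam, s + buffer)
                     for lam, s in zip(arrival_rates, predicted_service)]
        accepted = sum(1 for p, o in zip(predicted, optimum_tills) if p >= o)
        if 100.0 * accepted / len(optimum_tills) >= target_pct:
            return buffer
        buffer += step
    return max_buffer

# Hypothetical intraday intervals: arrivals/hour, predicted service time (s), optimum tills
arrivals = [80, 120, 200, 150]
pred_s = [55.0, 60.0, 65.0, 58.0]
optimum = [3, 4, 5, 4]
print(optimal_buffer(arrivals, pred_s, optimum))   # 5.0 seconds for these illustrative values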
Moreover, the system is configured to determine the tills to be operated for an intraday period, thereby optimizing workforce management. This improves customer transaction prediction based on the current behavior/trend, which is enabled by capturing real time or near real-time data corresponding to the current behavioral trend, such as customer transactions, customers in the queue, number of tills opened, and the like. Customer arrival prediction is improved based on the trend by fitting between the long term till plan, live data, and the deviation, using a dynamic lag generator based on recency, with the number of lags selected using a least-MAE approach (a sketch follows this paragraph).
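One way to read the dynamic lag generator is as an automated choice of how many recent intervals feed the arrival model. The sketch below is an assumed illustration, not the disclosed implementation: it builds lag features from a synthetic arrival series, fits a simple regressor per lag count, and keeps the lag count with the least MAE on a hold-out split.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def lag_matrix(series, n_lags):
    # Rows are [y[t-1], ..., y[t-n_lags]] with target y[t]
    X = np.column_stack([series[n_lags - k - 1:len(series) - k - 1] for k in range(n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(0)
t = np.arange(500)
arrivals = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, 500)   # synthetic hourly arrivals

best_lag, best_mae = None, np.inf
for n_lags in range(1, 13):
    X, y = lag_matrix(arrivals, n_lags)
    split = int(0.8 * len(y))                       # keep the most recent 20% as hold-out
    model = LinearRegression().fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    if mae < best_mae:
        best_lag, best_mae = n_lags, mae
print(best_lag, round(best_mae, 2))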
The customer arrival and service time increments enable meeting targeted acceptance percentages based on predictions versus actuals, with a boosting algorithm to capture trend recency. Further, implementation of the simulation model/technique enables simulating the increase or decrease in the average time a customer spends in the queue as the number of tills to be opened is increased or decreased, as in the sketch below.
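A very small event-based simulation can illustrate this: customers arrive with exponential inter-arrival times, each till serves one customer at a time from a single FIFO queue, and the average waiting time is reported for different till counts. This is a minimal sketch; the arrival rate and mean service time are hypothetical.

import numpy as np

def simulate_average_wait(n_tills, arrival_rate, mean_service, n_customers=20000, seed=0):
    # FIFO multi-server single-queue simulation; all times are in minutes
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0 / arrival_rate, n_customers))
    services = rng.exponential(mean_service, n_customers)
    free_at = np.zeros(n_tills)                    # time at which each till becomes free
    total_wait = 0.0
    for arrive, service in zip(arrivals, services):
        idx = np.argmin(free_at)                   # till that becomes available first
        start = max(arrive, free_at[idx])
        total_wait += start - arrive
        free_at[idx] = start + service
    return total_wait / n_customers

# Hypothetical: 2 customers/minute arriving, 1.5 minutes mean service time per customer
for tills in (4, 5, 6):
    print(tills, round(simulate_average_wait(tills, arrival_rate=2.0, mean_service=1.5), 2))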
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.