Abstract: As the withering process of tea leaves takes a long time to reach a desired moisture level, estimating when the tea leaves are ready to move to the next processing step is difficult and inefficient. The method and system disclosed herein provide an approach for withering schedule prediction of tea leaves. The system, by performing a spectral data analysis on an image of a plurality of tea leaves, estimates the moisture percentage in the plurality of tea leaves for a selected time stamp. Based on the estimated moisture percentage, a current temperature value, a current relative humidity value, and a current time stamp, the system generates a withering schedule for the plurality of tea leaves. The generated withering schedule is fine-tuned based on a course correction of an impact of deviation in one or more ambient parameters on the prediction of the withering schedule.
Description:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR WITHERING SCHEDULE PREDICTION OF TEA LEAVES
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to image processing, and, more particularly, to a method and system for withering schedule prediction of tea leaves based on image analysis and machine learning models.
BACKGROUND
The intricate process of tea production encompasses several stages, and among them, withering stands out as a pivotal initial step. Withering, crucial for moisture extraction from tea leaves, plays a key role in diminishing their weight and enhancing flexibility for subsequent processing. Precise control of this stage bears substantial influence on the ultimate quality of tea, particularly in terms of flavor and aroma.
However, the monitoring and control of the withering process poses various challenges. Some of the challenges are listed here. 1. Slow Withering Process Monitoring: Tea withering takes a long time (up to 16 hours) to reach the desired moisture level. Estimating when it is time to move to the next step to reach the target tea leaf moisture is difficult and inefficient. 2. Time-Consuming Moisture Determination: Checking leaf moisture using a microwave oven is a tedious process (which takes about 15 minutes) and is a destructive process, so less precise methods are used most of the time. 3. Managing Multiple Withering Troughs: In a tea factory, when multiple withering troughs (usually 20+) are close to the target moisture level (LT), using a shared microwave oven for testing becomes too slow and complicated. 4. Real-time Monitoring Difficulties: There is a lack of a procedure to accurately and quickly monitor the moisture levels in multiple troughs, especially when they are near the critical LT point, which delays moving the tea leaves to the next processing stage.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a processor implemented method is provided. The method includes: receiving, via one or more hardware processors, a) a plurality of captured images of a plurality of tea leaves, and b) historical data on cycle-wise withering details of the plurality of tea leaves from a plurality of sources, as input, wherein the historical data is with respect to a plurality of parameters comprising moisture percentage present in the tea leaves at a plurality of time stamps, and associated room temperature and relative humidity; preprocessing the input data, via the one or more hardware processors, to generate a pre-processed data; estimating, by processing the pre-processed data using a trained moisture percentage estimation model via the one or more hardware processors, a moisture percentage in the plurality of tea leaves, for a selected time stamp; and generating, by a trained time series forecasting model via the one or more hardware processors, a withering schedule for the plurality of tea leaves, based on a) the estimated moisture percentage, b) current temperature value, c) current relative humidity value, and d) a current timestamp.
In an embodiment of the method, the predicted withering schedule is fine-tuned, via the one or more hardware processors, by: performing a course correction of impact of deviation in one or more ambient parameters on the withering schedule; determining whether impact of the course correction on remaining withering schedule is within a defined threshold; and re-generating the withering schedule if the impact is determined as exceeding the defined threshold.
In an embodiment of the method, the moisture percentage estimation model is trained to generate the prediction with respect to the moisture percentage, by: receiving a plurality of spectral images of a plurality of reference tea leaves captured using a spectral camera device; preprocessing the received plurality of spectral images of the plurality of reference tea leaves to generate a preprocessed spectral data; extracting a foreground data from the preprocessed data, wherein the foreground data comprises a plurality of target areas of the plurality of reference tea leaves in the plurality of spectral images; transforming the foreground data of each of the plurality of reference tea leaves to an image matrix; extracting one or more bounding boxes from the image matrix using a bounding box classifier; estimating mean of pixel values associated with each of the plurality of reference tea leaves, for each of a plurality of bands in the preprocessed spectral data, by overlaying each of the one or more bounding boxes on the spectral image of each of the plurality of reference tea leaves; mapping the mean of pixel values with a moisture percentage; and training the moisture percentage estimation model with the mean of pixel values and the mapped moisture percentage, to generate the trained moisture percentage estimation model.
In another embodiment, a system is provided. The system includes one or more hardware processors, a communication interface, and a memory storing a plurality of instructions. The plurality of instructions cause the one or more hardware processors to: receive a) a plurality of captured images of a plurality of tea leaves, and b) historical data on cycle-wise withering details of the plurality of tea leaves from a plurality of sources, as input, wherein the historical data is with respect to a plurality of parameters comprising moisture percentage present in the tea leaves at a plurality of time stamps, and associated room temperature and relative humidity; preprocess the input data to generate a pre-processed data; estimate by processing the pre-processed data using a trained moisture percentage estimation model, a moisture percentage in the plurality of tea leaves, for a selected time stamp; and generate by a trained time series forecasting model, a withering schedule for the plurality of tea leaves, based on a) the estimated moisture percentage, b) current temperature value, c) current relative humidity value, and d) a current timestamp.
In an embodiment of the system, the one or more hardware processors are configured to fine-tune the predicted withering schedule, by: performing a course correction of impact of deviation in one or more ambient parameters on the withering schedule; determining whether impact of the course correction on remaining withering schedule is within a defined threshold; and re-generating the withering schedule if the impact is determined as exceeding the defined threshold.
In another embodiment of the system, the one or more hardware processors are configured to train the moisture percentage estimation model to estimate the moisture percentage, by: receiving a plurality of spectral images of each of the plurality of reference tea leaves captured using a spectral camera device; preprocessing the received plurality of spectral images of the plurality of reference tea leaves to generate a preprocessed spectral data; extracting a foreground data from the preprocessed data, wherein the foreground data comprises a plurality of target areas of the plurality of reference tea leaves in the plurality of spectral images; transforming the foreground data of each of the plurality of reference tea leaves to an image matrix; extracting one or more bounding boxes from the image matrix using a bounding box classifier; estimating mean of pixel values associated with each of the plurality of reference tea leaves, for each of a plurality of bands in the preprocessed spectral data, by overlaying each of the one or more bounding boxes on the spectral image of each of the plurality of reference tea leaves; mapping the mean of pixel values with a moisture percentage; and training the moisture percentage estimation model with the mean of pixel values and the mapped moisture percentage, to generate the trained moisture percentage estimation model.
In yet another embodiment, a non-transitory computer readable medium is provided. The non-transitory computer readable medium includes a plurality of instructions, which when executed, cause one or more hardware processors to: receive a) a plurality of captured images of a plurality of tea leaves, and b) historical data on cycle-wise withering details of the plurality of tea leaves from a plurality of sources, as input, wherein the historical data is with respect to a plurality of parameters comprising moisture percentage present in the tea leaves at a plurality of time stamps, and associated room temperature and relative humidity; preprocess the input data to generate a pre-processed data; estimate by processing the pre-processed data using a trained moisture percentage estimation model, a moisture percentage in the plurality of tea leaves, for a selected time stamp; and generate by a trained time series forecasting model, a withering schedule for the plurality of tea leaves, based on a) the estimated moisture percentage, b) current temperature value, c) current relative humidity value, and d) a current timestamp.
In an embodiment of the non-transitory computer readable medium, the one or more hardware processors are configured to fine-tune the predicted withering schedule, by: performing a course correction of impact of deviation in one or more ambient parameters on the withering schedule; determining whether impact of the course correction on remaining withering schedule is within a defined threshold; and re-generating the withering schedule if the impact is determined as exceeding the defined threshold.
In another embodiment of the non-transitory computer readable medium, the one or more hardware processors are configured to train the moisture percentage estimation model to estimate the moisture percentage, by: receiving a plurality of spectral images of each of the plurality of reference tea leaves captured using a spectral camera device; preprocessing the received plurality of spectral images of the plurality of reference tea leaves to generate a preprocessed spectral data; extracting a foreground data from the preprocessed data, wherein the foreground data comprises a plurality of target areas of the plurality of reference tea leaves in the plurality of spectral images; transforming the foreground data of each of the plurality of reference tea leaves to an image matrix; extracting one or more bounding boxes from the image matrix using a bounding box classifier; estimating mean of pixel values associated with each of the plurality of reference tea leaves, for each of a plurality of bands in the preprocessed spectral data, by overlaying each of the one or more bounding boxes on the spectral image of each of the plurality of reference tea leaves; mapping the mean of pixel values with a moisture percentage; and training the moisture percentage estimation model with the mean of pixel values and the mapped moisture percentage, to generate the trained moisture percentage estimation model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an exemplary system for predicting a withering schedule for tea leaves, according to some embodiments of the present disclosure.
FIG. 2 is a flow diagram depicting steps involved in the process of predicting the withering schedule for tea leaves, using the system of FIG. 1, according to some embodiments of the present disclosure.
FIG. 3 is a flow diagram depicting steps involved in the process of fine-tuning the predicted withering schedule for tea leaves, using the system of FIG. 1, according to some embodiments of the present disclosure.
FIGS. 4A and 4B (collectively referred to as FIG. 4) depict a flow diagram of steps involved in the process of training a moisture percentage estimation model to generate prediction with respect to moisture percentage in a plurality of tea leaves, using the system of FIG. 1, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Monitoring and control of the withering process poses various challenges. Some of the challenges are listed here. 1. Slow Withering Process Monitoring: Tea withering takes a long time (up to 16 hours) to reach the desired moisture level. Estimating when it is time to move to the next step to reach the target tea leaf moisture is difficult and inefficient. 2. Time-Consuming Moisture Determination: Checking leaf moisture using a microwave oven is a tedious process (which takes about 15 minutes) and is a destructive process, so less precise methods are used most of the time. 3. Managing Multiple Withering Troughs: In a tea factory, when multiple withering troughs (usually 20+) are close to the target moisture level (LT), using a shared microwave oven for testing becomes too slow and complicated. 4. Real-time Monitoring Difficulties: There is a lack of a procedure to accurately and quickly monitor the moisture levels in multiple troughs, especially when they are near the critical LT point, which delays moving the tea leaves to the next processing stage.
In order to address these challenges, the method and system disclosed herein provide an approach for withering schedule monitoring in which the system, by performing a spectral data analysis on a plurality of spectral images of a plurality of tea leaves, estimates a moisture percentage in the plurality of tea leaves for a selected time stamp. Further, based on the estimated moisture percentage, a current temperature value, a current relative humidity value, and a current time stamp, the system generates a withering schedule for the plurality of tea leaves. The generated withering schedule may be fine-tuned based on a course correction of an impact of deviation in one or more ambient parameters on the prediction of the withering schedule.
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates an exemplary system for withering schedule prediction, according to some embodiments of the present disclosure. The system 100 includes or is otherwise in communication with hardware processors 102, at least one memory such as a memory 104, and an I/O interface 112. The hardware processors 102, memory 104, and the Input/Output (I/O) interface 112 may be coupled by a system bus such as a system bus 108 or a similar mechanism. In an embodiment, the hardware processors 102 can be one or more hardware processors.
The I/O interface 112 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a printer, and the like. Further, the I/O interface 112 may enable the system 100 to communicate with other devices, such as web servers and external databases.
The I/O interface 112 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface 112 may include one or more ports for connecting several computing systems or devices with one another or to another server.
The one or more hardware processors 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, node machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 102 are configured to fetch and execute computer-readable instructions stored in the memory 104.
The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 104 includes a plurality of modules 106.
The plurality of modules 106 include programs or coded instructions that supplement applications or functions performed by the system 100 for executing the different steps involved in the withering schedule prediction. The plurality of modules 106, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The plurality of modules 106 may also be used as signal processor(s), node machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 106 can be implemented by hardware, by computer-readable instructions executed by the one or more hardware processors 102, or by a combination thereof. The plurality of modules 106 can include various sub-modules (not shown).
The data repository (or repository) 110 may include a plurality of abstracted pieces of code for refinement, and data that is processed, received, or generated as a result of the execution of the plurality of modules in the module(s) 106.
Although the data repository 110 is shown internal to the system 100, it will be noted that, in alternate embodiments, the data repository 110 can also be implemented external to the system 100, where the data repository 110 may be stored within a database (repository 110) communicatively coupled to the system 100. The data contained within such external database may be periodically updated. For example, new data may be added into the database (not shown in FIG. 1) and/or existing data may be modified and/or non-useful data may be deleted from the database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). Functions of the components of the system 100 are now explained with reference to the flow diagrams in FIG. 2, FIG. 3, and FIG. 4.
FIG. 2 is a flow diagram depicting steps involved in the process of predicting the withering schedule for tea leaves, using the system of FIG. 1, according to some embodiments of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 104 operatively coupled to the processor(s) 102 and is configured to store instructions for execution of steps of the method 200 by the processor(s) or one or more hardware processors 102. The steps of the method 200 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1 and the steps of the flow diagrams as depicted in FIGS. 2, 3, and 4. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
At step 202 of method 200 in FIG. 2, the system 100 receives, via the one or more hardware processors 102, a) a plurality of captured images of a plurality of tea leaves, and b) historical data on cycle-wise withering details of the plurality of tea leaves from a plurality of sources, as input. The historical data is with respect to a plurality of parameters comprising moisture percentage present in the tea leaves at a plurality of time stamps, and associated room temperature and relative humidity, captured over a plurality of past time instances. The system 100 may collect the input automatically, using appropriate input means. For example, the system 100 captures the image of the plurality of tea leaves using a spectral camera. Similarly, information on the room temperature and relative humidity at different timestamps are collected using one or more appropriate sensors deployed at a location where the withering is taking place, and the collected information on the room temperature and relative humidity, and the associated time stamps are communicated to the system 100 via appropriate interface(s) provided by the communication interface 112.
Further, at step 204 of the method 200, the system 100 preprocesses the input data, via the one or more hardware processors 102, to generate a pre-processed data. The preprocessing step may be carried out to classify and save the input data based on time intervals. Steps that may be executed as part of the preprocessing of the input data are:
Outlier Identification and T-Interval Calculation: This step identifies significant gaps or jumps in the time series data and calculates a representative time interval (T) between data points, excluding outliers. The T interval is the median of the time differences between consecutive tuples in the input data. However, this may not be accurate if there are outliers in the data that have very large or very small time differences. So, the system 100 omits the outlier data using appropriate techniques, for example, the interquartile range (IQR) method. The IQR is the difference between the 75th percentile and the 25th percentile of a sorted list of values. Any value that is more than 1.5 times the IQR above the 75th percentile or below the 25th percentile is considered an outlier; such values are filtered out from the list of time differences before finding the median. The algorithm for this process is given as:
Input:
data: Time series data with timestamps (Timestamp) in Table
Output:
t_interval: Time interval in secs
outlier_count: Number of outliers
Algorithm:
Validate data:
Check if timestamps are valid (Timestamp)
Ensure at least two data points
Calculate time differences:
Compute differences between consecutive timestamps in the data.
Convert to seconds:
Express time differences in seconds
Calculate percentiles and IQR:
Find 25th and 75th percentiles of time differences (q1 and q3)
Calculate Interquartile Range (IQR) as q3 - q1
Define outlier threshold:
Set threshold as 1.5 times IQR (seconds)
Identify outliers:
Identify timestamps with differences outside the range:
less than q1 - threshold or
greater than q3 + threshold
Calculate outlier count and T-interval:
Count identified outliers (outlier_count)
Find the median of time differences excluding outliers as T-interval (seconds)
Return:
t_interval, outlier_count
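The algorithm above may be sketched in Python as follows. This is a non-limiting illustration; the function name and the use of pandas/NumPy are assumptions of this sketch, not part of the disclosure.

```python
import numpy as np
import pandas as pd

def t_interval(timestamps):
    """Representative time interval (seconds) between readings,
    excluding outlier gaps via the 1.5 * IQR rule described above."""
    ts = pd.to_datetime(pd.Series(timestamps)).sort_values()
    if len(ts) < 2:  # validate: at least two data points required
        raise ValueError("need at least two data points")
    # time differences between consecutive timestamps, in seconds
    diffs = ts.diff().dropna().dt.total_seconds().to_numpy()
    q1, q3 = np.percentile(diffs, [25, 75])
    threshold = 1.5 * (q3 - q1)  # outlier threshold from the IQR
    outliers = (diffs < q1 - threshold) | (diffs > q3 + threshold)
    # median of the non-outlier differences is the T interval
    return float(np.median(diffs[~outliers])), int(outliers.sum())
```

For example, readings taken every ten minutes with one four-hour gap yield a T interval of 600 seconds and one outlier.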
Classifying Data based on timestamp for T Intervals: This step takes time series data and applies the T interval to classify each data point based on its proximity to the previous and next data points. This helps to identify meaningful data points within each T interval. At this step, the system 100 uses a function F to perform the classification. For each pair of consecutive tuples, the system 100 compares the absolute difference between the respective date-time values with nT, i.e., n times the T interval. If the difference is greater than or equal to nT, the system 100 assigns the timestamp of the second tuple as the classified time. Otherwise, the system 100 assigns the date-time of the first tuple as the classified time. Here, n = f(t, H). Pseudocode for this process is given as:
Input:
data: Time series data with timestamps (Timestamp)
T: t_interval in secs
n: Percentage (0-1) for calculating nT threshold
Output:
data: Modified Table with "new_timestamp" column for classified timestamps
Algorithm:
Calculate nT threshold:
nT = T * n
Calculate differences:
For each data point:
Find the difference to the next timestamp ("diff_next")
Find the difference from the previous timestamp ("diff_prev")
Apply classification function (F):
For each data point:
Apply function F to diff_prev, current_timestamp, diff_next, and nT
Store the classified timestamp ("new_timestamp") in the "new_timestamp" column
Remove temporary columns:
Drop "diff_prev" and "diff_next" columns
Return:
Modified data with "new_timestamp" column
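The classification step may be sketched in Python as follows. Since the disclosure does not fully specify function F, this sketch assumes one plausible reading: a point whose gap from the previous point is at least nT starts a new interval and keeps its own timestamp, while a closer point inherits the previous classified timestamp.

```python
import pandas as pd

def classify_timestamps(data, t_interval, n):
    """Snap each timestamp to a classified T-interval timestamp
    (one assumed interpretation of the classification function F)."""
    nT = t_interval * n
    ts = pd.to_datetime(data["Timestamp"])
    new = []
    for i, t in enumerate(ts):
        if i == 0 or (t - ts.iloc[i - 1]).total_seconds() >= nT:
            new.append(t)        # gap >= nT: keep the second tuple's timestamp
        else:
            new.append(new[-1])  # gap < nT: inherit the first tuple's classified time
    out = data.copy()
    out["new_timestamp"] = new   # temporary diff columns are not materialized here
    return out
```

With T = 600 s and n = 0.5, a reading 30 seconds after the previous one is merged into the previous interval, while a reading 570 seconds later starts a new one.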
Calculation of ‘n’ :
Inputs:
actual_temperature: Room temperature in degrees Celsius
ideal_temperature_min: Minimum ideal temperature (e.g., 20°C)
ideal_temperature_max: Maximum ideal temperature (e.g., 25°C)
actual_relative humidity: Relative humidity in percentage
ideal_relative humidity_min: Minimum ideal relative humidity (e.g., 40%)
ideal_relative humidity_max: Maximum ideal relative humidity (e.g., 60%)
Output:
0 < n < 1
Algorithm:
Normalize temperature:
normalized_temperature = (actual_temperature - ideal_temperature_min) / (ideal_temperature_max - ideal_temperature_min)
Normalize relative humidity:
normalized_relative humidity = (actual_relative humidity - ideal_relative humidity_min) / (ideal_relative humidity_max - ideal_relative humidity_min)
Calculate score using Pythagoras theorem:
n = sqrt((normalized_temperature)^2 + (normalized_relative humidity)^2)
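The calculation of n may be sketched as below (function and parameter names are illustrative):

```python
import math

def calculate_n(actual_t, t_min, t_max, actual_rh, rh_min, rh_max):
    """Combine normalized temperature and relative-humidity deviations
    into a single score n via the Euclidean (Pythagorean) distance."""
    norm_t = (actual_t - t_min) / (t_max - t_min)
    norm_rh = (actual_rh - rh_min) / (rh_max - rh_min)
    return math.sqrt(norm_t ** 2 + norm_rh ** 2)
```

Note that at the upper corner of the ideal range the raw distance reaches sqrt(2), so an implementation may need to clamp or rescale the score to keep 0 < n < 1 as the pseudocode's output specification states.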
Determine Timestamp with NA values and flag them: At this step, the system 100 analyzes the classified data and applies a scoring mechanism on each T interval timestamp based on the percentage of missing values (MVC%). If the MVC% exceeds a user-defined threshold, the entire T interval timestamp is flagged as REJECT, suggesting unreliable data within that period else it would be flagged HANDLE. Pseudocode for this process is given as:
Input:
data: Table with "new_timestamp", "m%", "temp" (t) and “relative humidity” (H) columns, potentially with missing values
x: Threshold percentage (0-100) for rejecting cycles with high missing values
Output:
data: Modified Table with an additional "score" column ("REJECT" or "HANDLE")
Algorithm:
Calculate missing values per cycle:
For each data point:
Count the number of missing values in the row ("missing_values")
Define the total number of values per cycle:
Determine the total number of columns in the Table ("total_values")
Calculate the missing value percentage per cycle:
Calculate the percentage of missing values per cycle ("MVC%"):
MVC% = (missing_values / total_values) * 100
Apply scoring mechanism:
For each data point:
If MVC% is greater than x:
Set "score" to "REJECT"
Otherwise:
Set "score" to "HANDLE"
Return:
Modified data with additional column “score”
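The scoring mechanism above may be sketched in Python as follows (a non-limiting illustration assuming a pandas table with the columns named in the pseudocode):

```python
import pandas as pd

def score_cycles(data, x):
    """Flag each row REJECT or HANDLE based on its
    missing-value percentage per cycle (MVC%)."""
    total_values = data.shape[1]                 # total number of columns
    missing = data.isna().sum(axis=1)            # missing values per row
    mvc = missing / total_values * 100           # MVC% per cycle
    out = data.copy()
    out["score"] = ["REJECT" if p > x else "HANDLE" for p in mvc]
    return out
```

A row with two of four values missing (MVC% = 50) is rejected under a threshold of x = 30, while a complete row is flagged HANDLE.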
Handling Missing Data: At this step, the system 100 handles missing values within flagged T intervals. This process involves handling missing M% values with linear interpolation, and missing values of temperature (t) and relative humidity (H) with Exponential Moving Average (EMA) based approaches, to smooth the time series. A pseudocode representation of this step is given below:
Input:
data: Table with columns:
new_timestamp: Timestamps classified by C2
temperature: Temperature values
relative humidity: Relative humidity values
score: "REJECT" or "HANDLE" from C3
m: The M% threshold for missing values
alpha: The smoothing factor for EMA (between 0 and 1)
Output:
data: Modified Table with filled “M%”, "temperature" and “relative humidity” column using interpolation and/or EMA
Algorithm:
Select data for handling:
Filter data for rows with "HANDLE" score:
Linear Interpolation for M%:
For each group in handle_data with missing values:
Identify gaps between non-missing values
For each gap:
Find the slope (m) and intercept (c) of the line connecting adjacent non-missing values
Use the linear equation y = mx + c to interpolate missing values for "M%"
Exponential Moving Average for remaining missing values of temperature (t) and relative humidity (H):
For each group in handle_data with remaining missing values:
For each missing value:
Calculate the EMA using the formula:
EMA_t = (a * Temp) + ((1 - a) * EMA_(t-1))
Where:
EMA_t: Estimated temperature at time t
Temp: Nearest non-missing temperature/ relative humidity value
a: Smoothing factor (between 0 and 1)
EMA_(t-1): Previous EMA value (or first non-missing value)
Merge data:
Combine the processed "HANDLE" data back to the original Table
Return:
Modified data with filled “M%”, "temperature" and “relative humidity” values using interpolation and/or EMA
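The gap-filling step may be sketched in Python as follows. This is a simplified, non-limiting illustration: pandas' `interpolate` implements the y = mx + c gap-filling directly, the EMA fill uses pandas' `ewm`, and the disclosure's per-group handling is omitted.

```python
import pandas as pd

def fill_missing(data, alpha):
    """Fill M% gaps by linear interpolation and temperature /
    relative humidity gaps with an EMA, on rows flagged HANDLE."""
    out = data.copy()
    mask = out["score"] == "HANDLE"
    # linear interpolation across gaps in M%
    out.loc[mask, "M%"] = out.loc[mask, "M%"].interpolate(method="linear")
    # EMA-based fill for temperature and relative humidity
    for col in ("temperature", "relative humidity"):
        s = out.loc[mask, col]
        ema = s.ewm(alpha=alpha, ignore_na=True).mean()  # carries last EMA forward
        out.loc[mask, col] = s.fillna(ema)
    return out
```

For a single gap, interpolation yields the midpoint of the adjacent M% values, and the EMA fill reduces to the last observed temperature/humidity value.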
Further, at step 206 of the method 200, the system 100 estimates, by processing the pre-processed data using a trained moisture percentage estimation model via the one or more hardware processors 102, moisture percentage in the plurality of tea leaves, for a selected time stamp. Various steps involved in the process of training the moisture percentage estimation model are depicted in method 400 in FIG. 4, and are explained hereafter.
At step 402 of the method 400, the system 100 receives a plurality of spectral images of a plurality of reference tea leaves captured using a spectral camera device. In an embodiment, during the training of the moisture percentage estimation model, the system 100 may be considered to be in a training phase. The plurality of spectral images is captured in one or more file formats, for example, .raw, .hdr, and .jpg, with corresponding spectral bands.
At step 404 of the method 400, the system 100 performs preprocessing of the received plurality of spectral images of the plurality of tea leaves to generate a preprocessed spectral data. The system 100 may perform grey-scale conversion, image blurring, and clustering using one or more clustering algorithms, as part of the pre-processing of the spectral image. The grey-scale conversion assigns pixel values in the captured image to grey-scale values where each pixel is described by a single value, typically ranging from 0 (black) to 255 (white). For smoothening of the grey-scaled image, a Gaussian blur may be used, which reduces high-frequency details and transitions, making the image look softer.
Further, at step 406 of the method 400, the system 100 extracts a foreground data from the preprocessed data. The foreground data includes a plurality of target areas of the plurality of reference tea leaves in the plurality of spectral images. A major part of each of the plurality of spectral images may be of areas surrounding the reference tea leaves, hence processing the entire image may be unnecessary and adds computational overhead. To avoid this, the foreground extraction is performed, such that the foreground data includes images of the plurality of reference tea leaves. The system 100 may cluster the grey-scaled, blurred image using a Clustering algorithm-1 for an initial segmentation. Here, K = 2 in the algorithm means clustering the set of pixels associated with the tea leaves as one cluster and the set of pixels associated with the surface as the other cluster. Further, the system 100 may use a Clustering algorithm-2, where a circular mask is created around the image to isolate the central region from noisy edges. The mask is black where the pixels are to be excluded and white where the pixels are to be included. Only the pixels within the mask are used for Clustering algorithm-2. This means the clustering only affects the central area of the image and ignores the potentially noisy edges. The radius of the mask is automatically calculated as half the minimum dimension of the image. This ensures the mask fits entirely within the image and doesn't accidentally exclude a portion of the central region. This process removes the additional noise (highly sensitive areas) created by the camera at the edges of the image.
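The foreground-extraction steps above can be illustrated with a minimal sketch, assuming grey values are clustered with K = 2 using a simple 1-D K-means; the helper names and the synthetic image are hypothetical, and a production system would typically use a library clustering routine:

```python
import numpy as np

def circular_mask(h, w):
    """Boolean disc centred in the image; radius = half the minimum dimension,
    so the mask fits entirely within the image and excludes noisy edges."""
    r = min(h, w) // 2
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

def kmeans_1d(pixels, k=2, iters=20):
    """Minimal 1-D K-means over grey values; K = 2 separates the set of pixels
    associated with tea leaves from the set associated with the surface."""
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, centers

# hypothetical grey-scaled, blurred image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
mask = circular_mask(*img.shape)          # True in the central disc, False at edges
labels, centers = kmeans_1d(img[mask])    # cluster only the pixels inside the mask
```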
Further, at step 408 of the method 400, the system 100 transforms the foreground data of each of the plurality of reference tea leaves into an image matrix.
Further, at step 410 of the method 400, the system 100 extracts one or more bounding boxes from the image matrix using a bounding box classifier. Pseudocode of the process used by the bounding box classifier for extracting the one or more bounding boxes is given below:
Input:
`image`: Segmented image (BImin=0, BImax=1)
`window_size`: Size of sliding window (width, height)
Output:
`bounding_boxes`: List of bounding boxes for detected objects
1. Pad image:
Add padding to image with `window_size // 2` on each side
Use constant value of 0 for padding
2. Iterate over image:
For each pixel at (x, y):
Check if within image boundaries:
x >= window_size[0] and x < image_width - window_size[0]
y >= window_size[1] and y < image_height - window_size[1]
3. Extract window:
Define window coordinates:
x_min = x - window_size[0] // 2
y_min = y - window_size[1] // 2
x_max = x + window_size[0] // 2
y_max = y + window_size[1] // 2
Extract window from padded image:
window = padded_image[y_min:y_max+1, x_min:x_max+1]
4. Check window pixels:
Check if all pixels in the window have value BImax (1)
5. Detect object:
If all pixels are BImax:
Add bounding box to list:
bounding_boxes.append(((x_min, y_min), (x_max, y_max)))
6. Return bounding boxes:
Return `bounding_boxes`
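The sliding-window pseudocode above may be sketched as follows. For brevity, this illustration omits the padding step and instead restricts window centres to the valid interior; the (height, width) ordering of window_size is an assumption:

```python
import numpy as np

def bounding_boxes(image, window_size):
    """Slide a fixed-size window over a binary segmented image and record a
    bounding box wherever every pixel in the window equals BImax (1)."""
    wh, ww = window_size          # (height, width) ordering is an assumption
    H, W = image.shape
    boxes = []
    for y in range(wh // 2, H - wh // 2):
        for x in range(ww // 2, W - ww // 2):
            y0, y1 = y - wh // 2, y + wh // 2
            x0, x1 = x - ww // 2, x + ww // 2
            if np.all(image[y0:y1 + 1, x0:x1 + 1] == 1):
                boxes.append(((x0, y0), (x1, y1)))
    return boxes

# a 3x3 block of ones inside a 7x7 segmented image yields a single box
seg = np.zeros((7, 7), dtype=int)
seg[2:5, 2:5] = 1
print(bounding_boxes(seg, (3, 3)))   # [((2, 2), (4, 4))]
```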
Further, at step 412 of the method 400, the system 100 estimates the mean of pixel values associated with each of the plurality of reference tea leaves, for each of a plurality of bands in a spectral data, by overlaying each of the one or more bounding boxes on the spectral image of each of the plurality of reference tea leaves. Further, at step 414 of the method 400, the system 100 maps the mean of pixel values with a moisture percentage. Further, at step 416 of the method 400, the system 100 trains the moisture percentage estimation model with the mean of pixel values (mean pixel values are alternately referred to as reflectance values) and the mapped moisture percentage, to generate the trained moisture percentage estimation model. In this training process, the pixel values against feature bands and the M% obtained from traditional methods (the oven-dry method, etc.) are used for training the moisture percentage estimation model. Here, a supervised-learning-based regression model is used to estimate the moisture, and the estimated model weights are saved for estimation. In this process, the system 100 performs automatic identification of a target spectral signature in each of the plurality of spectral images of the plurality of reference tea leaves, i.e., the tea leaves under test, and estimates moisture percentage from the associated target spectral signatures, where each spectral signature comprises reflectance values across multiple wavelength bands. This process involves:
Spectral Sample Amplification
Moisture % estimation
These steps are further explained below:
Spectral Sample Amplification:
In a withering setup, when tea leaves for withering arrive in the withering trough, a spectral device mounted overhead of the trough under specific conditions (closed, lighted area) is activated. Here, the .jpg file of one of the spectral bands undergoes a series of pre-processing operations which may include grey-scale conversion, image blurring, and one or more clustering algorithms.
The grey-scale conversion assigns pixel values in the captured image to grey-scale values where each pixel is described by a single value, typically ranging from 0 (black) to 255 (white). For smoothening of the grey-scaled image, a Gaussian blur may be used, which reduces high-frequency details and transitions, making the image look softer.
To separate the tea leaves (foreground) from the surface (background), the grey-scaled, blurred image is clustered using a Clustering algorithm-1 for initial segmentation. Here, K = 2 in the algorithm means clustering the set of pixels associated with the tea leaves as one cluster and the set of pixels associated with the surface as the other cluster.
Clustering algorithm-2 is used, where a circular mask is created around the image to isolate the central region from noisy edges. The mask is black where the pixels are to be excluded and white where the pixels are to be included. Only the pixels within the mask are used for Clustering algorithm-2. This means the clustering only affects the central area of the image and ignores the potentially noisy edges. The radius of the mask is automatically calculated as half the minimum dimension of the image. This ensures the mask fits entirely within the image and doesn't accidentally exclude a portion of the central region. This process removes the additional noise (highly sensitive areas) created by the camera at the edges of the image.
The output of the above-mentioned series of processing operations is an image matrix containing binary details. Furthermore, a Bounding Box Classifier is used for bounding box extraction over the image matrix. It evaluates whether the content within the current window matches the object being sought (here, the binary value corresponding to the tea leaves region over the image is taken into consideration) or not.
Pseudo Code for Bounding Box Classifier for Segmented Image
Input:
`image`: Segmented image (BImin=0, BImax=1)
`window_size`: Size of sliding window (width, height)
Output:
`bounding_boxes`: List of bounding boxes for detected objects
1. Pad image:
Add padding to image with `window_size // 2` on each side
Use constant value of 0 for padding
2. Iterate over image:
For each pixel at (x, y):
Check if within image boundaries:
x >= window_size[0] and x < image_width - window_size[0]
y >= window_size[1] and y < image_height - window_size[1]
3. Extract window:
Define window coordinates:
x_min = x - window_size[0] // 2
y_min = y - window_size[1] // 2
x_max = x + window_size[0] // 2
y_max = y + window_size[1] // 2
Extract window from padded image:
window = padded_image[y_min:y_max+1, x_min:x_max+1]
4. Check window pixels:
Check if all pixels in the window have value BImax (1)
5. Detect object:
If all pixels are BImax:
Add bounding box to list:
bounding_boxes.append(((x_min, y_min), (x_max, y_max)))
6. Return bounding boxes:
Return `bounding_boxes`
The extracted regions/pixel locations from the classifier are saved and used for further analysis. These pixel locations are superimposed on the .raw file (whose spatial shape matches that of the .jpg), which gives the hypercube information present in the extracted pixel locations in terms of reflectance values against the specified spectral bands. These reflectance values are in numerical form. The mean value of all reflectance values over the different pixel locations in one image is calculated and may be saved. Further, dimensionality reduction is done using an appropriate technique such as PCA (Principal Component Analysis) to make the process less computationally complex. As output of this process, relevant feature bands are generated.
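An illustrative sketch of the mean-reflectance and PCA steps above, using an SVD-based PCA on a hypothetical hypercube of shape (num_pixels, num_bands); the array sizes and the number of retained components are assumptions:

```python
import numpy as np

# Hypothetical hypercube: reflectance values at the extracted pixel locations,
# one row per pixel location, one column per spectral band.
rng = np.random.default_rng(0)
hypercube = rng.random((500, 120))

# Mean reflectance per spectral band over all extracted pixel locations.
mean_reflectance = hypercube.mean(axis=0)          # shape: (num_bands,)

# PCA via SVD on the centred data to retain the most informative feature bands.
centred = hypercube - mean_reflectance
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
n_components = 10
features = centred @ Vt[:n_components].T           # shape: (num_pixels, 10)
```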
Terms used are:
BI_min: Minimum threshold for segmented object
BI_max: Maximum threshold for segmented object
Bounding Box Classifier: Sliding Window Classifier (determines a fixed-size bounding box for BI_max over the image matrix)
Moisture percentage (%) estimation:
The reflectance values against feature bands and the moisture percentage (which may have been obtained using traditional methods such as the oven-dry method) are used for training the moisture percentage estimation model. A supervised-learning-based regression model may be used to estimate the moisture %, and the estimated model weights are then saved in an associated database for estimation.
Further, at step 208 of the method 200, the system 100 predicts, by a trained time series forecasting model via the one or more hardware processors 102, a withering schedule for the plurality of tea leaves, based on a) the estimated moisture percentage, b) current temperature value, c) current relative humidity value, and d) a current timestamp.
Pseudo Code for generating the withering schedule is given below. It is to be noted that the pseudocode refers to use of a specific neural network model, SAN_Transformer. However, this is for explanation purposes only; in an actual implementation, this may be done using an RNN, an LSTM, or any other suitable neural network model.
Define Input Parameters:
Input_sequences: A sequence of input features (room temperature, relative humidity, and moisture percentages of tea leaves) with shape (batch_size, sequence_length, input_size)
Here, sequence_length is defined by the number of time stamps at (t_interval) in each day. For example, if t_interval = 1800 sec, then the total sequence length for each day that starts at 11:00 a.m. and ends at 11:30 p.m. is 25.
Batch size is defined as the collective number of sequences (number of days). For example, if you forecast 1 year of data and have 10 years of historical data, the batch size is 365.
Input_size is defined as the total data in the database (25 * 365 * 10 = 91250).
In the embodiments disclosed herein, the forecasting is done only for each day of data. Hence, the batch size used is 25, and the total shape based on the above example is (25, 25, 91250). Calculation of the n value used herein is elaborated in step 204 of the method 200.
initial_moisture: The initial moisture percentage for the current day, with shape (batch_size, 1)
Define Output Parameter:
forecasted_moisture: The forecasted moisture percentages for the entire day, with shape (batch_size, output_sequence_length)
Define Model:
SAN_Transformer:
Input: input_size, output_size, num_layers, num_heads, dropout
Initialize:
encoder = TransformerEncoder(
TransformerEncoderLayer(input_size, num_heads, dropout),
num_layers
)
decoder = Linear(input_size, output_size)
Forward:
combined_input = Concatenate(input_sequences, initial_moisture)
encoded_representations = encoder(combined_input)
mean_representation = Mean(encoded_representations, dim=1)
output = decoder(mean_representation)
return output
Define Training Function:
Input: model, train_loader, val_loader, epochs, learning_rate, device
Move model to device
Initialize loss function (MSELoss)
Initialize optimizer (Adam with model parameters and learning_rate)
for epoch in range(epochs):
Initialize train_loss and val_loss
# Training Loop
Set model to train mode
for input_sequences, initial_moisture, targets in train_loader:
Move input_sequences, initial_moisture, and targets to device
Zero out gradients
outputs = model(input_sequences, initial_moisture)
loss = loss_function(outputs, targets)
Backpropagate loss
Update model parameters using optimizer
Accumulate train_loss
# Validation Loop
Set model to evaluation mode
with no_grad():
for input_sequences, initial_moisture, targets in val_loader:
Move input_sequences, initial_moisture, and targets to device
outputs = model(input_sequences, initial_moisture)
loss = loss_function(outputs, targets)
Accumulate val_loss
Compute average train_loss and val_loss
Print epoch, train_loss, and val_loss
return trained_model
# Example Usage
Define input_size, output_size, sequence_length, output_sequence_length, num_layers, num_heads, learning_rate, epochs, device
Assuming train_loader and val_loader are available, providing input_sequences, initial_moisture, and targets
#Initialize model
Call Training Function:
trained_model = train(
model,
train_loader,
val_loader,
epochs,
learning_rate,
device
)
# Use trained_model for inference
forecasted_moisture = trained_model(input_sequences, initial_moisture)
This pseudocode is further explained below:
Define Input Parameters:
input_sequences: A sequence of input features (room temperature, relative humidity, and moisture percentages) with shape (batch_size, sequence_length, input_size).
initial_moisture: The initial moisture percentage for the current day, with shape (batch_size, 1).
Define Output Parameter:
forecasted_moisture: The forecasted moisture percentages for the entire day, with shape (batch_size, output_sequence_length).
Define Model (SAN_Transformer):
Initialize the model with input parameters: input_size, output_size, num_layers, num_heads, and dropout.
Create the encoder as a TransformerEncoder with num_layers of TransformerEncoderLayer instances, each with input_size and num_heads.
Create the decoder as a linear layer that takes the encoded representations and maps them to the output size.
In the forward method:
Concatenate the input_sequences and initial_moisture to form the combined input.
Pass the combined input through the encoder to obtain encoded representations.
Take the mean of the encoded representations along the sequence dimension.
Pass the mean representation through the decoder to generate the output.
Define Training Function:
Input: model, train_loader, val_loader, epochs, learning_rate, and a device.
Move the model to the specified device.
Initialize the loss function as MSELoss.
Initialize the optimizer as Adam with the model parameters and the specified learning rate.
For each epoch:
Initialize train_loss and val_loss.
Training Loop:
Set the model to train mode.
For each batch of input_sequences, initial_moisture, and targets in the train_loader:
Move the inputs and targets to the specified device.
Zero out the gradients.
Compute the model outputs using model(input_sequences, initial_moisture).
Calculate the loss between the outputs and targets using the loss function.
Backpropagate the loss.
Update the model parameters using the optimizer.
Accumulate the train_loss.
Validation Loop:
Set the model to evaluation mode.
With no gradient computation:
For each batch of input_sequences, initial_moisture, and targets in the val_loader:
Move the inputs and targets to the specified device.
Compute the model outputs using model(input_sequences, initial_moisture).
Calculate the loss between the outputs and targets using the loss function.
Accumulate the val_loss.
Compute the average train_loss and val_loss.
Print the epoch, train_loss, and val_loss.
Return the trained model.
Example Usage:
Define the required parameters: input_size, output_size, sequence_length, output_sequence_length, num_layers, num_heads, learning_rate, epochs, and device.
Assuming train_loader and val_loader are available, providing input_sequences, initial_moisture, and targets.
Initialize the model.
Call the train function with the model, data loaders, epochs, learning rate, and device.
Obtain the trained model.
Inference:
Use the trained model for inference by passing input_sequences and initial_moisture to obtain forecasted_moisture.
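A compact sketch of the forecasting model described above, approximating SAN_Transformer with PyTorch's stock TransformerEncoder; the layer sizes, the batch-first layout, and the way initial_moisture is prepended as an extra time step are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SANTransformer(nn.Module):
    """Sketch of the forecaster: encode the combined input, mean-pool over the
    sequence dimension, then linearly decode to the forecasted moisture
    percentages for the day. SAN_Transformer is approximated with the stock
    nn.TransformerEncoder purely for illustration."""
    def __init__(self, input_size, output_size, num_layers=2, num_heads=2,
                 dropout=0.1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=input_size, nhead=num_heads, dropout=dropout,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.decoder = nn.Linear(input_size, output_size)

    def forward(self, input_sequences, initial_moisture):
        # prepend the day's initial moisture as an extra time step
        first = initial_moisture.unsqueeze(1).expand(
            -1, 1, input_sequences.size(-1))
        combined = torch.cat([first, input_sequences], dim=1)
        encoded = self.encoder(combined)
        return self.decoder(encoded.mean(dim=1))  # mean over sequence dim
```

Per the specification, any RNN, LSTM, or other suitable neural network may be substituted for this encoder.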
In an embodiment, the predicted withering schedule is fine-tuned, via the one or more hardware processors 102. Steps involved in the process of fine-tuning the predicted withering schedule are depicted in method 300 in FIG. 3, and are explained hereafter. At step 302 of the method 300, the system 100 performs a course correction of the impact of deviation in one or more ambient parameters on the prediction of the withering schedule. Further, at step 304 of the method 300, the system 100 determines whether the impact of the course correction on the remaining withering schedule is within a defined threshold. Further, at step 306 of the method 300, the system 100 re-schedules a latest withering schedule if the impact is determined to exceed the defined threshold.
In an embodiment, the system 100 optimizes the withering schedule based on a minimum number (if any) of checks to be performed during a cycle for sampling the tea leaves. Here, the current temperature (t) and current relative humidity (H) data come from respective sensors at regular intervals I, which is a subset of the T interval. The current timestamp is processed further using a Sigmoid function (F) as:
F(t_(-1), t_0, t_(+1)) = { t_(+1), if |t_(-1) - t_0| = nT; t_(-1), if |t_(-1) - t_0| < nT }
1. Comfort Zone Score (CZS) calculation:
if (ABSD_t >= 0.5 && ABSD_H >= 5):
CZS = 0
elif (ABSD_t >= 0.5 && ABSD_H <= 5):
CZS = 0
elif (ABSD_t <= 0.5 && ABSD_H >= 5):
CZS = 0
else:
CZS = 1
C_time = C_time + I interval
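The Comfort Zone Score logic may be sketched as follows; the function name is hypothetical, and the in-range CZS = 1 branch is an inference from the CZS = 1 reward case and the thresholds used in the Weather State Representation:

```python
def comfort_zone_score(absd_t, absd_h, t_thresh=0.5, h_thresh=5.0):
    """Comfort Zone Score: 0 whenever the temperature deviation (ABSD_t) or
    the relative-humidity deviation (ABSD_H) exceeds its threshold;
    1 when both deviations are within range (inferred branch)."""
    if absd_t <= t_thresh and absd_h <= h_thresh:
        return 1
    return 0
```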
Rewards calculation:
Input:
xA = Current airflow rate (0 to 3 m3/min per kg)
xt = ABSD_t (0 to 8)°C
xH = ABSD_H (0 to 100)%
Output:
Reward value as per definition is obtained as output, and is represented as:
R(x) = F(x_a) + F(x_t) + F(x_h)
where,
R(x) is a cumulative reward function that individually calculates rewards based on airflow F(x_a), temperature F(x_t), and relative humidity F(x_h), where x_a, x_t, and x_h are real numbers.
For Comfort Zone Score (CZS = 0),
F(x_a) = { -2(0.25 - x_a), if x_a < 0.25; 0, if 0.25 <= x_a <= 2.75; -2(x_a - 2.75), if x_a > 2.75 }
F(x_t) = { -4(x_t - 0.5), if x_t > 0.5; 0, if x_t <= 0.5 }
F(x_h) = { -2(x_h - 5), if x_h > 5; 0, if x_h <= 5 }
For Comfort Zone Score (CZS = 1),
F(x_a) = { (x_a - 0.25)/0.25, if 0.25 <= x_a <= 1.0; (2.75 - x_a)/0.25, if 2.0 <= x_a <= 2.75; 0, otherwise }
F(x_t) = (5 - x_t)/0.1
F(x_h) = 0.5 (5 - x_h)/1
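As an illustrative, non-limiting sketch, the cumulative reward R(x) = F(x_a) + F(x_t) + F(x_h) may be computed as follows; the piecewise branch boundaries follow one reading of the definitions above, and the function name is hypothetical:

```python
def reward(x_a, x_t, x_h, czs):
    """Cumulative reward over airflow (x_a), temperature deviation (x_t),
    and relative-humidity deviation (x_h), for a given Comfort Zone Score."""
    if czs == 0:
        # penalties pull the agent back toward the comfort zone
        if x_a < 0.25:
            f_a = -2 * (0.25 - x_a)
        elif x_a > 2.75:
            f_a = -2 * (x_a - 2.75)
        else:
            f_a = 0.0
        f_t = -4 * (x_t - 0.5) if x_t > 0.5 else 0.0
        f_h = -2 * (x_h - 5) if x_h > 5 else 0.0
    else:
        # CZS = 1: positive rewards while inside the comfort zone
        if 0.25 <= x_a <= 1.0:
            f_a = (x_a - 0.25) / 0.25
        elif 2.0 <= x_a <= 2.75:
            f_a = (2.75 - x_a) / 0.25
        else:
            f_a = 0.0
        f_t = (5 - x_t) / 0.1
        f_h = 0.5 * (5 - x_h) / 1
    return f_a + f_t + f_h
```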
2. Weather State Representation: A method/function that maps states (t, H) to actions (modifying the airflow rate (A)) in a continuous action space.
Input:
C_H and C_t at I interval
P_H and P_t at T interval
CZS
Airflow rate (A)
C_time (I interval time)
T_time (Target Time: Predicted time at which target M% is scheduled to happen from scheduler)
Output: Airflow rate
Pseudocode for Weather State Representation:
while( C_time <= T_time):
if C_time<= 2*(T intervals) :
if (ABSD_t <= 0.5 & ABSD_H <= 5):
Maintain current airflow rate
elif (ABSD_t >= 0.5 & ABSD_H >= 5):
Use weather agent to adjust airflow rate to bring the parameter back within the range
elif (ABSD_t >= 0.5 & ABSD_H <= 5):
Use weather agent to adjust airflow rate to bring the parameter back within the range
elif (ABSD_t <= 0.5 & ABSD_H >= 5):
Use weather agent to adjust airflow rate to bring the parameter back within the range
else:
//If fails after multiple attempts at T interval
Reset Forecasting model with current data and re-predict target time.
Adjust airflow rate based on new prediction if necessary.
C_time = C_time + I interval
// Target time reached, shutdown process
SHUTDOWN_PROCESS()
3. RL Agent: An Actor-Critic-based deep neural network, which works on the principle of Reinforcement Learning, is used.
Input :
Rewards (goes into Critic network)
CZS (goes into Actor network)
Range of airflow between ‘0’ and ‘3’ m3/min per kg (goes into Actor network)
Output:
Rate of change of Airflow
Airflow rate
Exit condition for Weather Decision Block:
When C_time > T_time
Course correction due to ambient parameter deviations runs at most 2*(T interval) time from the C_time.
If, after 2*(T interval) time, the deviations in ambient parameters are still not corrected, the user is prompted to check the current M% (C_M%) value.
S_time (Start time)
C_time (current time)
T_time (Target Time (calculated as per the predicted series for the day))
PT_time = Predicted Target Time (time to reach the target M% from start of withering or after reset of Scheduler Rectifier)
CT_time = Current Target time
NOTE:
ABSD = Absolute Difference
C_time = Current time
PT_time = Predicted Target time
C_t = Current Temperature
P_t = Predicted temperature
C_H = Current Relative humidity
P_H = Predicted Relative humidity
C_M% = Current Moisture %
P_M% = Predicted Moisture%
Steps for course correction mechanism are given below:
1. Check current moisture % (C_M%) and set the Target moisture % (T_M%):
A user input with respect to T_M% value is received, which is based on the C_M% value
2. Determine Predicted Target time (PT_time):
PT_time value is obtained from predicted timeseries based on P_M%, P_H, P_t.
PT_time = |T_time – S_time|
3. Weather Decision block start running at I interval:
Determining absolute difference values of relative humidity (represented as ABSD_H) and temperature (represented as ABSD_t) from respective sensors and predicted timeseries data.
A weather module W (which may be part of the system 100 in implementation, not shown) takes ABSD_t, ABSD_H, and default airflow rate as inputs and calculates Comfort Zone Score (CZS) and gives CZS as output.
RL Agent takes the CZS value and default initial ambient parameters and airflow rate as input and gives update on a weather state data generated by a weather state representation module in an example implementation of the system 100 (not shown).
Further, at any interval I, based on the calculated values, modification or change in airflow rate is obtained as per a current state.
Rewards are calculated in the weather module W once RL agent makes decision based on CZS, airflow, current temperature, and current relative humidity values.
These rewards at I interval are stored in RL agent for further computation.
4. If step 3 fails after course correction for 2T interval or C_time == T_time (Check Exit conditions for Weather Decision Block), go to step 5.
5. Check C_M% from moisture % estimation model.
6. Determine CT_time from new predicted timeseries based on C_M%.
CT_time = |T_time – C_time|
7. Calculations for Time Decision Block:
Determine new Predicted Target Time (based on new predicted timeseries): n_PT_time = |CT_time - C_time|
Calculate Time Threshold:
Time Threshold = ±5% of n_PT_time
Time Threshold Lower value (LT)= PT_time – Time Threshold
Time Threshold Upper value (UT)= PT_time + Time Threshold
Time Threshold Range = LT to UT
8. Time Threshold Test:
If LT <= CT_time <= UT:
Within range means trigger Weather Decision Block
else:
Outside range means retrain Forecasting model with current data (C_M%, C_H, C_t)
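Steps 6 to 8 above may be sketched as follows; treating the *_time quantities as plain numeric durations and the function name are illustrative assumptions:

```python
def time_threshold_test(pt_time, ct_time, c_time):
    """Steps 7-8: build a ±5% band around the new predicted target time and
    decide whether to re-enter the Weather Decision Block (within range)
    or retrain the forecasting model (outside range)."""
    n_pt_time = abs(ct_time - c_time)      # new Predicted Target Time
    threshold = 0.05 * n_pt_time           # Time Threshold = ±5% of n_PT_time
    lt = pt_time - threshold               # Time Threshold Lower value (LT)
    ut = pt_time + threshold               # Time Threshold Upper value (UT)
    if lt <= ct_time <= ut:
        return "trigger_weather_decision_block"
    return "retrain_forecasting_model"
```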
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The embodiments of present disclosure herein address unresolved problem of withering schedule prediction of tea leaves. The embodiment, thus provides a mechanism for withering schedule prediction of tea leaves based on a moisture percentage predicted using a spectral analysis. Moreover, the embodiments herein further provide a mechanism for fine-tuning of the predicted withering schedule.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
Claims:
1. A processor implemented method, comprising:
receiving (202), via one or more hardware processors, a) a plurality of captured images of a plurality of tea leaves, and b) historical data on cycle-wise withering details of the plurality of tea leaves from a plurality of sources, as input, wherein the historical data is with respect to a plurality of parameters comprising moisture percentage present in the tea leaves at a plurality of time stamps, and associated room temperature and relative humidity;
preprocessing (204) the input data, via the one or more hardware processors, to generate a pre-processed data;
estimating (206), by processing the pre-processed data using a trained moisture percentage estimation model via the one or more hardware processors, a moisture percentage in the plurality of tea leaves, for a selected time stamp; and
generating (208), by a trained time series forecasting model via the one or more hardware processors, a withering schedule for the plurality of tea leaves, based on a) the estimated moisture percentage, b) current temperature value, c) current relative humidity value, and d) a current timestamp.
2. The processor implemented method as claimed in claim 1, wherein the generated withering schedule is fine-tuned, via the one or more hardware processors, wherein the fine-tuning comprises:
performing (302) a course correction of impact of deviation in one or more ambient parameters on the withering schedule;
determining (304) whether impact of the course correction on remaining withering schedule is within a defined threshold; and
regenerating (306) the withering schedule if the impact is determined as exceeding the defined threshold.
3. The processor implemented method as claimed in claim 1, wherein the moisture percentage estimation model is trained to estimate the moisture percentage, comprising:
receiving (402) a plurality of spectral images of a plurality of reference tea leaves captured using a spectral camera device;
preprocessing (404) the received plurality of spectral images of the plurality of reference tea leaves to generate a preprocessed spectral data;
extracting (406) a foreground data from the preprocessed spectral data, wherein the foreground data comprises a plurality of target areas of the plurality of reference tea leaves in the plurality of spectral images;
transforming (408) the foreground data of each of the plurality of reference tea leaves to an image matrix;
extracting (410) one or more bounding boxes from the image matrix using a bounding box classifier;
estimating (412) mean of pixel values associated with each of the plurality of reference tea leaves, for each of a plurality of bands in the preprocessed spectral data, by overlaying each of the one or more bounding boxes on the spectral image of each of the plurality of reference tea leaves;
mapping (414) the mean of pixel values with a moisture percentage; and
training (416) the moisture percentage estimation model with the mean of pixel values and the mapped moisture percentage, to generate the trained moisture percentage estimation model.
4. A system (100), comprising:
one or more hardware processors (102);
a communication interface (112); and
a memory (104) storing a plurality of instructions, wherein the plurality of instructions cause the one or more hardware processors to:
receive a) a plurality of captured images of a plurality of tea leaves, and b) historical data on cycle-wise withering details of the plurality of tea leaves from a plurality of sources, as input, wherein the historical data is with respect to a plurality of parameters comprising moisture percentage present in the tea leaves at a plurality of time stamps, and associated room temperature and relative humidity;
preprocess the input data to generate a pre-processed data;
estimate, by processing the pre-processed data using a trained moisture percentage estimation model, a moisture percentage in the plurality of tea leaves, for a selected time stamp; and
generate, by a trained time series forecasting model, a withering schedule for the plurality of tea leaves, based on a) the estimated moisture percentage, b) a current temperature value, c) a current relative humidity value, and d) a current time stamp.
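Schedule generation per claim 4 can be illustrated with the sketch below. The first-order exponential drying model, the rate constants, and the 65% target moisture are assumptions chosen only to make the input/output contract concrete; the specification instead generates the schedule with a trained time series forecasting model.

```python
import math
from datetime import datetime, timedelta


def generate_withering_schedule(moisture_pct: float,
                                temperature_c: float,
                                relative_humidity: float,
                                now: datetime,
                                target_pct: float = 65.0) -> datetime:
    """Predict the time stamp at which the target moisture is reached,
    from the estimated moisture and the current ambient readings."""
    if moisture_pct <= target_pct:
        return now  # target moisture already met; no withering time remains
    # Assumed first-order drying: the rate rises with temperature and
    # falls with relative humidity (illustrative constants, not from the claims).
    rate_per_hour = 0.02 * (1 + 0.03 * (temperature_c - 25)) * (1 - relative_humidity / 200)
    rate_per_hour = max(rate_per_hour, 1e-4)
    hours = math.log(moisture_pct / target_pct) / rate_per_hour
    return now + timedelta(hours=hours)
```

The returned time stamp plays the role of the withering schedule endpoint; in the claimed system it would then be fine-tuned by the course correction of claim 5.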
5. The system as claimed in claim 4, wherein the one or more hardware processors are configured to fine-tune the generated withering schedule, by:
performing a course correction of an impact of deviation in one or more ambient parameters on the withering schedule;
determining whether an impact of the course correction on the remaining withering schedule is within a defined threshold; and
regenerating the withering schedule if the impact is determined as exceeding the defined threshold.
6. The system as claimed in claim 4, wherein the one or more hardware processors are configured to train the moisture percentage estimation model to estimate the moisture percentage, by:
receiving a plurality of spectral images of a plurality of reference tea leaves captured using a spectral camera device;
preprocessing the received plurality of spectral images of the plurality of reference tea leaves to generate a preprocessed spectral data;
extracting a foreground data from the preprocessed spectral data, wherein the foreground data comprises a plurality of target areas of the plurality of reference tea leaves in the plurality of spectral images;
transforming the foreground data of each of the plurality of reference tea leaves to an image matrix;
extracting one or more bounding boxes from the image matrix using a bounding box classifier;
estimating mean of pixel values associated with each of the plurality of reference tea leaves, for each of a plurality of bands in the preprocessed spectral data, by overlaying each of the one or more bounding boxes on the spectral image of each of the plurality of reference tea leaves;
mapping the mean of pixel values with a moisture percentage; and
training the moisture percentage estimation model with the mean of pixel values and the mapped moisture percentage, to generate the trained moisture percentage estimation model.
| # | Name | Date |
|---|---|---|
| 1 | 202421024483-STATEMENT OF UNDERTAKING (FORM 3) [27-03-2024(online)].pdf | 2024-03-27 |
| 2 | 202421024483-REQUEST FOR EXAMINATION (FORM-18) [27-03-2024(online)].pdf | 2024-03-27 |
| 3 | 202421024483-FORM 18 [27-03-2024(online)].pdf | 2024-03-27 |
| 4 | 202421024483-FORM 1 [27-03-2024(online)].pdf | 2024-03-27 |
| 5 | 202421024483-FIGURE OF ABSTRACT [27-03-2024(online)].pdf | 2024-03-27 |
| 6 | 202421024483-DRAWINGS [27-03-2024(online)].pdf | 2024-03-27 |
| 7 | 202421024483-DECLARATION OF INVENTORSHIP (FORM 5) [27-03-2024(online)].pdf | 2024-03-27 |
| 8 | 202421024483-COMPLETE SPECIFICATION [27-03-2024(online)].pdf | 2024-03-27 |
| 9 | 202421024483-FORM-26 [20-05-2024(online)].pdf | 2024-05-20 |
| 10 | Abstract1.jpg | 2024-05-21 |
| 11 | 202421024483-Proof of Right [24-07-2024(online)].pdf | 2024-07-24 |
| 12 | 202421024483-POA [26-02-2025(online)].pdf | 2025-02-26 |
| 13 | 202421024483-FORM 13 [26-02-2025(online)].pdf | 2025-02-26 |
| 14 | 202421024483-Power of Attorney [25-03-2025(online)].pdf | 2025-03-25 |
| 15 | 202421024483-Form 1 (Submitted on date of filing) [25-03-2025(online)].pdf | 2025-03-25 |
| 16 | 202421024483-Covering Letter [25-03-2025(online)].pdf | 2025-03-25 |
| 17 | 202421024483-FORM-26 [16-04-2025(online)].pdf | 2025-04-16 |