Abstract: A method for monitoring a fluid level is provided. The method includes receiving acoustic signals that are generated when the fluid is being filled in a holding object (110), extracting one or more acoustic features from the received acoustic signals, and providing one or more inputs including the extracted acoustic features to a machine-learning system (128) having a machine-learning model (130). The fluid level in the holding object is monitored using the machine-learning model (130) based on the extracted acoustic features and training data that is used to train the machine-learning model (130). The training data includes the acoustic features corresponding to various levels of the fluid in at least one type of holding object (110). The fluid level in the holding object (110) is automatically controlled to be within a desired range based on the monitoring information.
SYSTEM AND METHOD FOR MONITORING A FLUID LEVEL
BACKGROUND
[0001] Embodiments of the present specification relate generally to a system and
method for monitoring a level of a fluid in a holding object, and more particularly to a system and method for monitoring a level of a fluid using acoustic signals.
[0002] Monitoring a liquid level is of great significance in many industrial and
domestic applications. For example, in an oil industry application, pumping oil into a storage tank needs to be monitored to prevent oil spillage. In an automotive industry application, the level of fuel in a vehicle fuel tank is monitored to check whether a desired volume of fuel has been refilled in the tank. In a household application, a water level in a container is monitored to control the flow of water automatically, thus preventing water from overflowing the container.
[0003] Conventional liquid level monitoring systems use a wide variety of sensors
to monitor the liquid level. For example, a conventional liquid level monitoring system uses an ultrasonic sensor to monitor the liquid level in the container. The ultrasonic sensor can be of an invasive type or a non-invasive type. The invasive type of ultrasonic sensor is placed within the container for monitoring the liquid level, whereas the non-invasive type ultrasonic sensor is mounted on an external surface (e.g., an outer wall) of the container.
[0004] The ultrasonic sensor emits ultrasound signals toward the surface of the liquid in the container and uses the signals reflected from the liquid surface to monitor the liquid level within the container. Since the invasive type ultrasonic sensor is exposed to the contents of the container, the reliability and performance of the sensor are greatly reduced over time. The non-invasive type ultrasonic sensor may be
expensive and may require frequent calibration, which adds cost to the liquid level monitoring system.
[0005] Another conventional liquid level monitoring system uses a pressure sensor
to monitor the liquid level in the container. The pressure sensor operates on the principle of hydrostatic pressure. The hydrostatic pressure inside a liquid container is linearly proportional to the level of the liquid in the container. Hence, the liquid level in the container can be determined by measuring the fluid pressure in the container. Most conventional pressure sensors are made of a silicone elastic-type material. When a load is applied on such pressure sensors for measuring the liquid level in the container, the silicone elastic-type material located between the electrodes of the pressure sensors deforms. Deformation of the silicone elastic-type material causes the pressure sensors to provide different outputs for the same load over a period of time.
[0006] Another type of sensor typically used for liquid level monitoring is an electrical sensor. The electrical sensor operates on the principle of electrical conductivity for measuring the level of a conductive liquid, such as an inflammable liquid, within a tank. The electrical sensor includes two electrodes. The metal wall of the tank acts as a first electrode, and an electrical probe inserted into the tank acts as a second electrode. When the conductive liquid is not in contact with the electrical probe, the electrical resistance is relatively high between the probe and the metal tank wall. As the level of the liquid rises within the tank, the electrical resistance between the probe and the metal tank wall proportionately decreases. The electrical sensor is typically used to track the level of inflammable liquid within a tank, and its usage is not considered safe for household purposes such as monitoring a water level in a water filtration container.
[0007] Apart from the above-mentioned sensors, there are also other types of
sensors, such as a float sensor, a capacitance sensor, an optical sensor, a radar sensor, and a wireless sensor, that are used for liquid level monitoring. However, such sensors are also expensive. Accordingly, there remains a need for a cost-effective
system and method that avoids usage of expensive sensors for monitoring fluid levels in a holding object.
BRIEF DESCRIPTION
[0008] According to an exemplary aspect of the present specification, a method for
monitoring a fluid level is provided. The method includes receiving acoustic signals that are generated when the fluid is being filled in a holding object, extracting one or more acoustic features from the received acoustic signals, and providing one or more inputs including the extracted acoustic features to a machine-learning system having a machine-learning model. The fluid level in the holding object is monitored using the machine-learning model based on the extracted acoustic features and training data that is used to train the machine-learning model. The training data includes the acoustic features corresponding to various levels of the fluid in at least one type of holding object. The fluid level in the holding object is automatically controlled to be within a desired range based on the monitoring information.
[0009] The one or more acoustic features include one or more mel-frequency
cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux. Values associated with the extracted acoustic features may vary based on a fluid level in the holding object. Acoustic signals that are generated when the fluid is being filled to one or more selected levels in different types of the holding object may be received. The different types of the holding object may differ in shape, size, volume, and constituent material. Each of the received acoustic signals may be fragmented into a plurality of acoustic frames. Each acoustic frame may be of a designated frame size. The one or more acoustic features may be extracted from each of the acoustic frames. The training data may be generated to train the machine-learning model based on the fragmented acoustic frames. The
training data may include one or more sets of extracted acoustic features and a fluid level corresponding to each of the sets.
[0010] Values of the extracted acoustic features in the training data may be
normalized to be within a selected range to obtain normalized training data comprising normalized acoustic features. The normalized values associated with at least one type of acoustic feature selected from the extracted acoustic features may be fitted to a smooth curve using a spline function to obtain a smooth splined acoustic feature. The normalized training data may be updated to smooth splined training data based on the smooth splined acoustic feature. The machine-learning model may be generated based on the smooth splined training data.
[0011] The fragmented acoustic frames and the acoustic features corresponding to
the acoustic frames may be segmented into training data and validation data. The training data may be used to generate the machine-learning model. The validation data may be used to validate the generated machine-learning model. The training data may be segmented into at least a first training dataset and a second training dataset. The first training dataset includes sets of acoustic features that may be used to monitor the fluid level in the holding object in a first range. The second training dataset includes other sets of acoustic features that may be used to monitor the fluid level in the holding object in a second range.
[0012] The generated machine-learning model includes a machine-learning
algorithm that may be trained with both the first training dataset and the second training dataset to monitor the fluid level in the first range and the second range. Alternatively, the generated machine-learning model includes a first machine-learning algorithm that may be trained with the first training dataset to monitor the fluid level in the first range and a second machine-learning algorithm that may be trained with the second training dataset to monitor the fluid level in the second range. A first validation dataset including a set
of acoustic features that corresponds to a fluid level in the first range may be provided as an input to the generated machine-learning model.
[0013] A difference between the fluid level identified by the generated machine-
learning model based on the first validation dataset and an actual fluid level may be determined, and it may be identified whether the difference is within a first defined threshold value. A performance of the generated machine-learning model in identifying the fluid level in the first range may be validated based on the determined difference. A second validation dataset including a set of acoustic features that corresponds to a fluid level in the second range may be provided as an input to the generated machine-learning model. A difference between the fluid level identified by the generated machine-learning model based on the second validation dataset and an actual fluid level may be determined, and it may be identified whether the difference is within a second defined threshold value. The second defined threshold value may be less than the first defined threshold value. A performance of the generated machine-learning model in identifying the fluid level in the second range may be validated based on the determined difference.
[0014] The training data may be updated by fitting normalized values associated
with another acoustic feature that is different from a previously selected acoustic feature to a smooth curve using a spline function to obtain updated training data having an updated smooth splined acoustic feature. The update of the training data may be performed when the performance of the generated machine-learning model in identifying the fluid level in at least one of the first range and the second range is not acceptable. An updated machine-learning model may be generated based on the updated training data.
[0015] The generated machine-learning model may be updated by replacing a previously used machine-learning algorithm for monitoring the fluid level in the first range with another machine-learning algorithm when the performance of the previously used machine-learning algorithm is not acceptable in identifying the fluid level in the first
range. The generated machine-learning model may be updated by replacing a previously used machine-learning algorithm for monitoring the fluid level in the second range with another machine-learning algorithm when the performance of the previously used machine-learning algorithm is not acceptable in identifying the fluid level in the second range.
[0016] Values associated with the extracted acoustic features may be normalized
to be within a selected range to obtain normalized acoustic data. The normalized values associated with a specific type of acoustic feature that is smooth splined during generation of the machine-learning model may be fitted to a smooth curve using a spline function to obtain smooth splined acoustic data. The smooth splined acoustic data may be provided as the input to the machine-learning model to monitor the fluid level in the holding object. A fluid inflow into the holding object may be stopped when the fluid level in the holding object reaches a desired level.
[0017] According to another exemplary aspect of the present specification, a
system for monitoring a fluid level is provided. The system includes an acoustic signal-receiving device and a processing device that is operatively coupled to the acoustic signal-receiving device. The acoustic signal-receiving device receives one or more acoustic signals that are generated when a fluid is being filled in a holding object. The processing device includes an acoustic signal processing system and a machine-learning system having a machine-learning model. The acoustic signal processing system is configured to extract one or more acoustic features from the received acoustic signals. The machine-learning model is configured to monitor the fluid level in the holding object based on the extracted acoustic features and training data that is used to train the machine-learning model. The training data includes the acoustic features corresponding to various levels of the fluid in at least one type of the holding object.
[0018] The acoustic signal processing system may be further configured to
normalize values associated with the extracted acoustic features to be within a selected
range to obtain normalized acoustic data. Normalized values associated with a specific type of acoustic feature that is smooth splined during generation of the machine-learning model may be fitted to a smooth curve using a spline function to obtain smooth splined acoustic data. The smooth splined acoustic data may be provided as an input to the machine-learning model to monitor the fluid level in the holding object. The system may further include an acoustic chamber inside which the acoustic signal-receiving device is placed. The acoustic chamber may be a noise-proof chamber that prevents undesired acoustic signals in the surrounding environment from entering into the acoustic chamber. The acoustic signal-receiving device may be a microphone.
[0019] The system further includes a fluid flow control device that is
communicatively coupled to the processing device and is configured to control a fluid
inflow into the holding object based on the monitored information. The system may
include a home water dispensing unit, a beverage dispensing unit, a fuel dispensing
unit, a fluid storage unit, a water treatment and purification unit, and a chemical
treatment unit. The system may further include a communication unit
communicatively coupled to a remote device that may be configured to control the fluid level in the holding object. The system further includes an output device communicatively coupled to one or more of the processing device, the communication unit, and the remote device. The communication unit is configured to transmit one or more of alerts and operational information to one or more of the output device and the remote device based on the fluid level in the holding object.
DRAWINGS
[0020] These and other features, aspects, and advantages of the claimed subject
matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0021] FIG. 1 is a schematic diagram illustrating an exemplary fluid level
monitoring system, according to one embodiment of the present disclosure;
[0022] FIG. 2A and FIG. 2B depict a flow diagram illustrating an exemplary
method for generating a machine-learning model that is configured to monitor fluid levels in a holding object using the system of FIG. 1;
[0023] FIG. 3A is an exemplary graphical representation depicting variations in
values of spectral entropy for a particular acoustic frame selected from acoustic signals used by the system of FIG. 1 for monitoring fluid levels in holding objects;
[0024] FIG. 3B is another exemplary graphical representation depicting variations
in values of spectral energy for a particular acoustic frame selected from acoustic signals used by the system of FIG. 1 for monitoring fluid levels in holding objects;
[0025] FIG. 3C is another exemplary graphical representation depicting a first
smooth curve and a second smooth curve that are obtained after applying the smoothing spline method over the spectral entropy of FIG. 3A and the spectral energy of FIG. 3B, respectively;
[0026] FIG. 4 is a flow diagram illustrating an exemplary method for monitoring
the fluid level in the holding object in real-time using the fluid level monitoring system of FIG. 1;
[0027] FIG. 5A is an exemplary graphical representation illustrating the
performance of the machine-learning model of FIG. 2B that is generated without applying the smoothing spline method over a particular acoustic feature;
[0028] FIG. 5B is another exemplary graphical representation illustrating the
performance of the machine-learning model of FIG. 2B that is generated by applying the smoothing spline method over a particular acoustic feature; and
[0029] FIG. 5C is yet another exemplary graphical representation illustrating the
performance of the machine-learning model of FIG. 2B that is generated by applying the smoothing spline method and by tuning one or more hyper-parameters of the machine-learning model.
DETAILED DESCRIPTION
[0030] The following description presents exemplary systems and methods for
monitoring a fluid level in a holding object. Particularly, embodiments described herein disclose systems and methods for monitoring the fluid level in the holding object based on acoustic signals that are generated when the fluid is being filled in the holding object. It may be noted that different embodiments of the present fluid level monitoring system may be used to monitor fluid levels in many industrial applications, such as in the oil industry, automotive industry, water treatment and purification industry, chemical treatment industry, and gas industry. The fluid level monitoring system may also be used to monitor the fluid level in a household application, for example, for monitoring a beverage level (e.g., a water or coffee level) in a container and for automatically stopping the water inflow when the water in the container reaches a desired level. However, for clarity of explanation, the fluid level monitoring system will be described herein only with reference to the household application, in which the fluid level in a holding object is continuously monitored and the fluid inflow is controlled to fill the holding object only up to a desired level, as described in detail with reference to FIG. 1.
[0031] FIG. 1 is a schematic diagram illustrating an exemplary fluid level
monitoring system (100). The fluid level monitoring system (100) includes a fluid source (102), a fluid mobilizer (104), an acoustic chamber (106), an acoustic signal-capturing device (108), a holding object (110), an embedded device (112), and a flow control device (114). In one embodiment, the fluid source (102) is a fluid storage
medium that stores the fluid. The fluid source (102) is operatively coupled to the fluid mobilizer (104), which receives the fluid from the fluid source (102) through a channel (116). In certain embodiments, the channel (116) is a pipe that interconnects the fluid source (102) and the fluid mobilizer (104), and the fluid mobilizer (104) is a pump. The fluid mobilizer (104) pumps the fluid received from the fluid source (102) into the acoustic chamber (106) through another channel (118).
[0032] The acoustic chamber (106) includes an inlet (120) through which the fluid
flows into the acoustic chamber (106) from the fluid mobilizer (104) and an outlet (122) through which the fluid flows out to the holding object (110). In one embodiment, the acoustic chamber (106) is a sealed chamber that prevents air from the surroundings from entering the acoustic chamber (106). The acoustic chamber (106) is also a noise-proof chamber that does not allow external noise signals to enter the acoustic chamber (106). The fluid, thus flowing out through the outlet (122) of the acoustic chamber (106), passes through a channel (124) and fills the holding object (110). As the fluid is being filled in the holding object (110), acoustic signals are generated. The fluid level monitoring system (100) records the generated acoustic signals using the acoustic signal-capturing device (108) and uses the recorded acoustic signals for monitoring the fluid level in the holding object (110). An example of the acoustic signal-capturing device (108) is a microphone.
[0033] In certain embodiments, the acoustic signal-capturing device (108) is
placed within the acoustic chamber (106) and may be in direct contact with the fluid in the acoustic chamber (106). In an alternative embodiment, however, a protective casing may be provided for the acoustic signal-capturing device (108) such that the acoustic signal-capturing device (108) may not be in contact with the fluid in the acoustic chamber (106). In one embodiment, the acoustic signals, generated when the fluid is being filled in the holding object (110), are transmitted to the acoustic chamber
(106) through the channel (124) and are recorded by the acoustic signal-capturing device (108).
[0034] In an embodiment, the acoustic signal-capturing device (108) transmits the
recorded acoustic signals to the embedded device (112) through a conductive medium (125). An example of the conductive medium (125) is one or more electrical wires. The embedded device (112) corresponds to a processing device including, but not limited to, one or more general-purpose processors, specialized processors, graphical processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, and/or other suitable computing devices. Particularly, in certain embodiments, the embedded device (112) includes an acoustic signal processing system (126) and a machine-learning system (128). The embedded device (112) receives the acoustic signals recorded by the acoustic signal-capturing device (108) and provides the recorded acoustic signals as inputs to the acoustic signal processing system (126).
[0035] The acoustic signal processing system (126) continuously extracts one or
more acoustic features from the acoustic signals. Examples of the extracted acoustic features include but are not limited to one or more mel-frequency cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux. In certain embodiments, the extracted acoustic features may serve as signatures or indicators of the fluid level in the holding object (110). Values associated with these acoustic features vary depending on a level of the fluid in the holding object (110). For example, values of the acoustic features extracted from the acoustic signals that are generated when 10 percent of the holding object is filled differ from values of the acoustic features extracted from the acoustic signals that are generated when 11 percent of the holding object is filled. Thus, the fluid level monitoring system (100) uses the extracted acoustic features for monitoring the fluid level in the holding object (110).
[0036] In one embodiment, the acoustic signal processing system (126) further
processes the extracted acoustic features using the method described in greater detail
with respect to FIG. 2A and provides the processed acoustic features as one or more inputs to the machine-learning system (128). In certain embodiments, the machine-learning system (128) includes a machine-learning model (130) that is generated and is trained to monitor the fluid level in the holding object (110) based on the processed acoustic features and training data that is used to train the machine-learning model (130), as described in greater detail with respect to FIG. 2A and FIG. 2B. The fluid levels in the holding object (110), thus identified by the machine-learning system (128), are provided as inputs to the flow control device (114).
[0037] In certain embodiments, the flow control device (114) is operatively
coupled to the embedded device (112) and obtains the fluid level information in the holding object (110) from the embedded device (112). Further, the flow control device (114) is configured to stop the fluid inflow into the holding object (110) when the fluid level in the holding object (110) reaches a desired level. In one embodiment, the flow control device (114) is a control relay circuit that opens and/or closes a solenoid valve (not shown in FIG. 1), which is kept at a desired location in a fluid flow path from the fluid source (102) to the holding object (110) for allowing or stopping the fluid inflow into the holding object (110).
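By way of illustration only, the control behavior described above may be sketched as follows; the valve and level-reading interfaces shown are hypothetical stand-ins for the solenoid valve relay of the flow control device (114) and the level output of the embedded device (112), and are not part of the present specification.

```python
import time

class SolenoidValve:
    """Hypothetical stand-in for the relay-driven solenoid valve."""
    def open(self) -> None:
        print("valve opened")

    def close(self) -> None:
        print("valve closed")

def control_fill(read_level_pct, valve: SolenoidValve, desired_pct: float) -> None:
    """Keep the valve open until the monitored fluid level reaches the set point."""
    valve.open()
    while read_level_pct() < desired_pct:
        time.sleep(0.1)  # poll the embedded device's level output at a short interval
    valve.close()

# Illustrative usage, with a hypothetical level-reading callable:
# control_fill(lambda: current_identified_level(), SolenoidValve(), 95.0)
```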
[0038] In one embodiment, the fluid level monitoring system (100) further
includes a communication unit (132) that is operatively coupled to one or more of the embedded device (112) and the flow control device (114). When the fluid is being filled in the holding object (110) in real-time, the communication unit (132) obtains the fluid level information in the holding object (110) from the embedded device (112) and sends one or more status messages to a remote device (134) and/or an output device (136), such as an audio-video display unit, to provide real-time fluid level information to a user associated with the remote device (134). The status messages may be sent to the remote device (134) and/or the output device (136) at regular designated time intervals or continuously. Alternatively, the communication unit (132) sends an alert
message to the remote device (134) and/or the output device (136) to notify or alert the user about the fluid level in the holding object (110) before the fluid level in the holding object (110) reaches a desired level.
[0039] In one embodiment, the communication unit (132) communicates with the
remote device (134) via a communication network, for example, a Wi-Fi network, an Ethernet, and a cellular data network. Examples of the remote device (134) include a cellular phone, a laptop, a tablet-computing device, and a desktop computer. In one embodiment, the user remotely controls the fluid level in the holding object (110) using an application residing on the remote device (134). For example, the user may provide one or more actionable inputs using the remote device (134) to control the fluid flow in the holding object (110) prior to initiation or during the flow of the fluid into the holding object (110). Examples of the actionable user inputs include an indication to stop the fluid flow, change a rate at which the fluid currently fills the holding object (110), set a desired volume of fluid to be filled, set a desired instant of time or a period of time for filling fluid in the holding object (110), etc. The communication unit (132) receives the one or more actionable inputs from the remote device (134) and provides the received inputs to the flow control device (114) in order to control the fluid flow based on the received inputs. A methodology employed for generating the machine-learning model (130) that is configured to monitor the fluid level in the holding object (110) is described in detail with reference to FIG. 2A.
[0040] FIG. 2A is a flow diagram (200) illustrating an exemplary method for
generating the machine-learning model (130) of the machine-learning system (128) that is configured to monitor the fluid level in the holding object (110) of FIG. 1. In certain embodiments, the machine-learning model (130) is generated offline, and subsequently the generated machine-learning model (130) is used for monitoring the fluid level in real-time. For generating the machine-learning model (130), the fluid level monitoring system (100) described with respect to FIG. 1 can be used except that
the embedded device (112) in a model generation scenario acts as a storage device for storing the acoustic signals. The order in which the exemplary method (200) is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein.
[0041] At step (202), the acoustic signal processing system (126) receives acoustic
signals from the acoustic signal-capturing device (108) that are generated when the fluid is being filled up to a plurality of selected levels in at least one type of the holding object (110). In one embodiment, when the acoustic signal-capturing device (108) records the generated acoustic signals, a sampling rate of 44100 Hz is maintained. When generating the machine-learning model (130), the acoustic signals generated when the fluid is being filled in various types of containers up to various selected levels are recorded, and a plurality of audio files are generated based on the recorded acoustic signals.
[0042] For example, acoustic signals generated when the fluid is being filled in a
first type of container up to a first level (e.g., 1 percent of the container volume) are recorded and a first audio file is generated based on the recorded acoustic signals. Examples of the types of containers include plastic containers, glass containers, metal containers, and paper containers. Further, the containers may be of different shapes and sizes, and accordingly the containers may have different volumes. In another example, acoustic signals generated when the fluid is being filled in the first type of container up to a second level (e.g., 5 percent of the container volume) are recorded and a second audio file is generated based on the acoustic signals. In yet another example, acoustic signals generated when the fluid is being filled in a second type of
container up to a first level (e.g., 10 percent of the container volume) are recorded and a third audio file is generated based on the acoustic signals.
[0043] Similarly, it is to be understood that the plurality of audio files are generated based on the acoustic signals generated when the fluid is being filled in various types of containers up to various selected levels. The generated audio files having the acoustic signals are then used to generate the machine-learning model (130) such that the machine-learning model (130) monitors the fluid at any level in any type of container in real-time.
[0044] At step (204), the acoustic signals received from the acoustic signal-
capturing device (108) are fragmented into a plurality of acoustic frames. More specifically, each of the generated audio files having the acoustic signals is fragmented into a plurality of acoustic frames, and each of the acoustic frames is of a designated frame size. In one embodiment, the designated frame size is 1024, and hence each acoustic frame has 1024 audio samples. For example, a 10-second audio file including the acoustic signals may be fragmented into 10 acoustic frames, and hence each acoustic frame length is one second. In certain embodiments, the designated frame size is kept at 1024 for extracting the acoustic features from the acoustic frames such that there is a 50 percent overlap between two subsequent acoustic frames. For example, while extracting a first set of acoustic features from a first acoustic frame, acoustic information from the first second is considered, whereas while extracting a second set of acoustic features, acoustic information from 0.5 seconds to 1.5 seconds is considered.
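A minimal illustrative sketch of this fragmentation step is given below, assuming 1024-sample frames with a 50 percent overlap as described above (Python, numpy assumed):

```python
import numpy as np

def fragment(signal: np.ndarray, frame_size: int = 1024, overlap: float = 0.5) -> np.ndarray:
    """Split a recorded acoustic signal into overlapping fixed-size frames."""
    hop = int(frame_size * (1.0 - overlap))  # 512 samples for a 50 percent overlap
    n_frames = 1 + max(0, (len(signal) - frame_size) // hop)
    return np.stack([signal[i * hop:i * hop + frame_size] for i in range(n_frames)])
```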
[0045] At step (206), one or more acoustic features are extracted from each of the
acoustic frames. The acoustic signal processing system (126) reads the audio files having the fragmented audio frames at the sampling rate (e.g., 44100 Hz), which is the same as the sampling rate maintained at the time of recording the generated acoustic
signals. The audio files having the fragmented audio frames are subjected to a window function such that fewer side lobes are generated when compared to main lobes. As noted previously, the acoustic features extracted from the acoustic frames include, but are not limited to, one or more mel-frequency cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux. A methodology or a technique used to determine a value associated with each of the acoustic features is described in the subsequent paragraphs.
[0046] The mel-frequency cepstral coefficients (MFCCs) are coefficients that
collectively make up a mel-frequency cepstrum (MFC), which is a representation of the short-term power spectrum of an acoustic signal. In one embodiment, the MFCCs are derived from each of the fragmented acoustic frames by performing a discrete cosine transform on a mel-signal. It is to be noted that transformation of the mel-signal based on the discrete cosine transform ensures that the acoustic signals under consideration have a spectral representation that is sensitive in a way similar to how human hearing works.
[0047] Further, in certain embodiments, while transforming the mel-signal using
the discrete cosine transform, 40 cepstral coefficients, including higher-order cepstral coefficients and lower-order cepstral coefficients, are generated from every acoustic frame of the acoustic signals. In one embodiment, for generating the machine-learning model (130), only the 13 cepstral coefficients that correspond to the lower-order cepstral coefficients are considered. The remaining cepstral coefficients, which correspond to the higher-order cepstral coefficients, are discarded as they were determined not to be critical for monitoring the fluid level in the holding object (110) in real-time. The filter banks in the mel-signal generally overlap with each other, and hence the acoustic energies associated with the filter banks are correlated with each other. The discrete cosine transform is performed on the mel-signal to de-correlate the acoustic energies associated with the filter banks. Thus, diagonal
covariance matrices having the mel-frequency cepstral coefficients can be used to generate the machine-learning model (130) using any machine-learning algorithm, for example, a support vector machine.
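An illustrative sketch of this MFCC derivation is given below; it assumes the librosa and scipy libraries, 40 mel filter banks, and retention of the 13 lower-order coefficients as described above, with all other parameter choices being illustrative assumptions:

```python
import numpy as np
import librosa
from scipy.fftpack import dct

def mfcc_13(frame: np.ndarray, sr: int = 44100) -> np.ndarray:
    """13 lower-order MFCCs for one acoustic frame: log mel filter-bank
    energies de-correlated by a discrete cosine transform."""
    mel_energies = librosa.feature.melspectrogram(
        y=frame.astype(float), sr=sr, n_fft=len(frame),
        hop_length=len(frame), n_mels=40, center=False)  # 40 mel filter banks
    log_mel = np.log(mel_energies + 1e-12)
    coeffs = dct(log_mel, axis=0, norm="ortho")  # 40 cepstral coefficients
    return coeffs[:13].ravel()                   # discard the higher-order ones
```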
[0048] In certain embodiments, a value associated with an acoustic feature, that is, the spectral energy, is determined in accordance with equation (1).
$E_s = \langle x(t), x(t) \rangle = \int_{-\infty}^{\infty} |x(t)|^2 \, dt \qquad (1)$

where $E_s$ represents the spectral energy and $x(t)$ represents the acoustic signal in a time-domain representation. Generating the machine-learning model (130) using the spectral energy enables the machine-learning model (130) to correlate the spectral energy to a particular fluid level in the holding object (110) in real-time. The value determined using equation (1) provides the spectral energy for a window, and hence the spectral energy associated with each of the acoustic frames is obtained by normalizing the determined spectral energy using a corresponding acoustic frame length.
[0049] In one embodiment, another acoustic feature, the spectral entropy, provides data on how differently the spectral energy distributes when the fluid fills various types of holding objects up to various levels. The machine-learning model (130) is generated using the spectral entropy during a model generation scenario such that the generated machine-learning model (130) correlates the spectral entropy to a particular fluid level in the holding object (110) irrespective of the type of the holding object (110) used in real-time. In an embodiment, the spectral entropy is calculated in accordance with equation (2).
$PSE = -\sum_{i=1}^{n} p_i \ln p_i \qquad (2)$
where $PSE$ corresponds to the power spectral entropy and $p_i$ corresponds to a probability density function that is obtained by normalizing a power density function $P(\omega_i)$ in accordance with equation (3):

$p_i = \dfrac{P(\omega_i)}{\sum_{j=1}^{n} P(\omega_j)} \qquad (3)$

The power density function $P(\omega_i)$ is obtained in accordance with equation (4), where $N$ represents the length of an acoustic signal, $X(\omega_i)$ represents the acoustic signal in the frequency domain, and $\omega_i$ represents an angular frequency of the acoustic signal:

$P(\omega_i) = \dfrac{1}{N} |X(\omega_i)|^2 \qquad (4)$
[0050] In certain embodiments, a further acoustic feature, the spectral flatness, is a measure used to characterize an audio spectrum. The machine-learning model (130) is generated using the spectral flatness during a model generation scenario such that the generated machine-learning model (130) correlates the spectral flatness to a particular fluid level in the holding object (110) irrespective of the shape of the holding object (110) used in real-time. The spectral flatness is calculated by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum in accordance with equation (5).
$F = \dfrac{\sqrt[N]{\prod_{n=0}^{N-1} x(n)}}{\dfrac{1}{N} \sum_{n=0}^{N-1} x(n)} = \dfrac{\exp\!\left(\dfrac{1}{N} \sum_{n=0}^{N-1} \ln x(n)\right)}{\dfrac{1}{N} \sum_{n=0}^{N-1} x(n)} \qquad (5)$

where $N$ represents the length of an acoustic signal and $x(n)$ represents the magnitude of bin number $n$.
[0051] In certain embodiments, a further acoustic feature, the spectral flux, is a measure of how quickly the power spectrum of an acoustic signal is changing. The spectral flux is calculated by comparing the power spectrum for one acoustic frame
against the power spectrum from the previous acoustic frame. The machine-learning model (130) is generated using the spectral flux such that the spectral flux assists the machine-learning model (130) in countering any bias that may be introduced in the machine-learning model (130) by the usage of the spectral energy. Thus, the acoustic features are extracted from each of the acoustic frames.
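The spectral feature computations of equations (1) through (5) and the flux comparison may be sketched as follows; the Hann window and the small epsilon terms are illustrative additions for numerical safety and are not part of the equations above:

```python
import numpy as np

def spectral_features(frame: np.ndarray, prev_power=None):
    """Spectral energy, entropy, flatness, and flux for one acoustic frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    power = (spectrum ** 2) / len(frame)              # P(w_i), as in equation (4)

    energy = power.sum() / len(frame)                 # equation (1), frame-normalized
    p = power / (power.sum() + 1e-12)                 # p_i, as in equation (3)
    entropy = -np.sum(p * np.log(p + 1e-12))          # equation (2)
    flatness = (np.exp(np.mean(np.log(spectrum + 1e-12)))
                / (spectrum.mean() + 1e-12))          # equation (5)
    flux = 0.0 if prev_power is None else float(np.sum((power - prev_power) ** 2))
    features = {"energy": energy, "entropy": entropy,
                "flatness": flatness, "flux": flux}
    return features, power  # returned power feeds the next frame's flux comparison
```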
[0052] At step (208), the fragmented acoustic frames and the acoustic features
corresponding to the acoustic frames are segmented into training data and validation data. The training data is used to generate the machine-learning model (130), whereas the validation data is used to validate the performance of the generated machine-learning model (130) in identifying the fluid levels in the holding objects. In certain embodiments, the training data includes sets of acoustic features extracted from the acoustic frames and the fluid level corresponding to each set.
[0053] For example, the training data includes a set having values associated with
the extracted acoustic features where the first MFCC is 0.132, the second MFCC is -0.004, the thirteenth MFCC is 0.001, the spectral energy is 0.0002 joules, the spectral entropy is 0 bits/nats, the spectral flatness is 0.906, and the spectral flux is -29.2 Hz⁻¹. Values of the acoustic features in the set are represented using certain exemplary units merely for the sake of explanation, and it is to be understood that the training data may include the values of the acoustic features in any suitable units. The acoustic features in the exemplary set are extracted from an acoustic frame that corresponds to a specific fluid level in the holding object (110). For example, the acoustic frame may correspond to a fluid level when 10 percent of the holding object (110) is filled. Hence, the training data will include the exemplary set and the fluid level corresponding to the exemplary set as 10 percent of the total volume of the holding object (110). Similarly, it is to be understood that the training data includes a plurality of sets of acoustic features and a fluid level corresponding to each set.
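For concreteness, one such training example may be represented as follows; the dictionary layout is an illustrative choice, and the MFCC values elided in the example above remain elided here:

```python
training_row = {
    "mfcc": [0.132, -0.004, ..., 0.001],  # MFCC 1, MFCC 2, (elided), MFCC 13
    "spectral_energy": 0.0002,            # units illustrative, per the text above
    "spectral_entropy": 0.0,
    "spectral_flatness": 0.906,
    "spectral_flux": -29.2,
    "fluid_level_pct": 10.0,              # label: 10 percent of the total volume
}
```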
[0054] In certain embodiments, the validation data includes only sets of acoustic
features extracted from the acoustic frames and may not include the fluid level corresponding to each set. Once the machine-learning model (130) is generated using the training data, the validation data including sets of acoustic features is provided as inputs to the machine-learning model (130), and the performance of the machine-learning model (130) in identifying the fluid levels for the given inputs is monitored to validate the performance of the generated machine-learning model (130).
[0055] Further, at step (210), the training data is segmented into at least two
training datasets for training the machine-learning model (130) to monitor the fluid level across various ranges. For the sake of simplicity, the embodiments described herein provide segmentation of the training data into three datasets: a first training dataset, a second training dataset, and a third training dataset. The first training dataset includes sets of acoustic features and the fluid level associated with each of the sets, where the fluid level falls within a first range. For example, the sets having the acoustic features, which are all extracted from the acoustic signals generated when the fluid is being filled in the holding objects from zero to 40 percent of the total volume of the holding objects, are categorized as the first training dataset.
[0056] Similarly, the second training dataset includes sets of acoustic features that
are all extracted from the acoustic signals generated when the volume of the fluid filled in the holding objects (110) falls in a second range, for example from 40 percent to 80 percent of the total volume of the holding objects. The third training dataset includes sets of acoustic features that are all extracted from the acoustic signals generated when the volume of the fluid filled in the holding objects (110) falls in a third range, for example from 80 percent to the total volume of the holding objects.
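An illustrative sketch of this segmentation, assuming the example boundaries of 40 percent and 80 percent and the training-example layout sketched earlier:

```python
import bisect

def segment_by_range(rows, boundaries=(40.0, 80.0)):
    """Split training rows into the first/second/third datasets by fluid level."""
    datasets = ([], [], [])
    for row in rows:
        idx = bisect.bisect_left(boundaries, row["fluid_level_pct"])  # 0, 1, or 2
        datasets[idx].append(row)
    return datasets
```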
[0057] At step (212), the training data including the sets of extracted acoustic
features is normalized to obtain normalized training data. More specifically, the values associated with all types of extracted acoustic features are normalized such that
the values fall within a selected range, for example, between 0 and 1. Hence, the values of the extracted acoustic features in the first training dataset, the second training dataset, and the third training dataset are normalized to be within their corresponding selected ranges.
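An illustrative min-max normalization sketch is given below; retaining the per-feature minima and maxima is an assumption made here so that the same scaling can be reapplied to real-time data at step (408) of FIG. 4:

```python
import numpy as np

def min_max_normalize(X: np.ndarray):
    """Scale every feature column of X into [0, 1] and keep the scaling parameters."""
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)  # guard against constant columns
    return (X - mins) / span, (mins, maxs)
```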
[0058] At step (214), the normalized values associated with one or more specific
types of acoustic features are fitted to a smooth curve using a spline function based on a smoothing spline method to obtain smooth splined training data. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using the spline function, and the splining is performed using a controlling parameter lambda (λ). In certain embodiments, lambda (λ) is a smoothing parameter that smooths variations in the values of an acoustic feature and controls the trade-off between fidelity and roughness of the values associated with the acoustic feature. A roughness penalty is identified based on a second-order derivative and is used to obtain the trade-off. Further, the splining provides a method of balancing a measure of fit with a penalty term based on sums-of-squares. The penalty term may be used as a prior distribution and may be used to estimate the spline based on Bayes' theorem.
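This smoothing step may be sketched with scipy's penalized smoothing spline, where the `lam` argument plays the role of the controlling parameter lambda (λ); the value of `lam` shown is illustrative, and the fluid-level values are assumed distinct:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

def spline_smooth(level_pct: np.ndarray, feature_vals: np.ndarray, lam: float = 1.0):
    """Fit a smoothing spline (second-derivative roughness penalty weighted by
    lam) to one acoustic feature and return the smoothed feature values."""
    order = np.argsort(level_pct)  # the spline fit expects ascending x values
    spline = make_smoothing_spline(level_pct[order], feature_vals[order], lam=lam)
    return spline(level_pct)
```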
[0059] Fitting the normalized values associated with an acoustic feature to a
smooth curve to smoothen variations in the values of the acoustic feature using the spline function is depicted in FIGS. 3A, 3B, and 3C. FIG. 3A is an exemplary graphical representation (300) depicting variations in the values of the spectral entropy for a particular acoustic frame before applying the smoothing spline method. Similarly, FIG. 3B is another exemplary graphical representation (302) depicting variations in the values of the spectral energy for a particular acoustic frame before applying the smoothing spline method. FIG. 3C is yet another graphical representation (304) depicting a first smooth curve (306) (represented using a solid line in FIG. 3C) that is obtained after applying the smoothing spline method over the spectral entropy of FIG. 3A. Further, FIG. 3C depicts a second smooth curve (308) (represented using a dotted
line in FIG. 3C) that is obtained after applying the smoothing spline method over the spectral energy of FIG. 3B. An x-axis of the graphical representation (304) represents a percentage of fluid filled in the holding object (110) and a y-axis of the graphical representation (304) represents the normalized values of the spectral entropy and the spectral energy.
[0060] Application of the smoothing spline method over a particular acoustic
feature updates the values of the particular acoustic feature in the training data. For example, when the smoothing spline method is applied over the spectral entropy values, the normalized values of the spectral entropy in the first training dataset, the second training dataset, and the third training dataset are updated based on the spline function. Thus, the first training dataset having sets of normalized acoustic features is updated to a first smooth splined dataset having updated values of the spectral entropy, while the values associated with the other acoustic features remain the same. Similarly, it is to be understood that the second training dataset and the third training dataset having sets of normalized acoustic features are updated to a second smooth splined dataset and a third smooth splined dataset, respectively. Throughout the description of the various embodiments presented herein, the first, second, and third smooth splined datasets are collectively referred to as the smooth splined training data.
[0061] Referring back to the description of FIG. 2A, at step (216), the machine-learning model (130) is generated based on the smooth splined training data. The machine-learning model (130) is generated using one or more machine-learning algorithms including, but not limited to, support vector machines, cubist and lasso regression, decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, and genetic algorithms. The machine-learning model (130) may also be generated using rule-based machine learning and learning classifier systems.
In certain embodiments, hyper-parameters associated with the machine-learning model (130) may be finalized after several rounds of cross-validation and experimentation. Examples of the hyper-parameters associated with the machine-learning model (130) include a type of network to be used for regression, such as linear networks or radial basis function networks, a learning rate, a number of latent factors in a matrix factorization, a number of hidden layers in a deep neural network, and a number of clusters in k-means clustering.
[0062] In one embodiment, the machine-learning model (130) is generated using a
specific type of machine-learning algorithm. For example, the machine-learning model (130) is generated using a support vector machine. In this scenario, the smooth splined training dataset, including the first smooth splined dataset, the second smooth splined dataset, and the third smooth splined dataset, is provided as input to the support vector machine. Thus, the generated machine-learning model (130) is configured to monitor the fluid level in the holding object (110) using the support vector machine in real-time.
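An illustrative training sketch using a support vector regressor follows; the hyper-parameter values shown are placeholders for the cross-validated choices described above rather than recommended settings:

```python
import numpy as np
from sklearn.svm import SVR

def train_level_model(X: np.ndarray, y: np.ndarray) -> SVR:
    """Train a support vector machine mapping smooth splined acoustic feature
    sets (rows of X) to fluid levels in percent of total volume (y)."""
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)  # illustrative hyper-parameters
    return model.fit(X, y)

# Illustrative usage:
# level = train_level_model(X_train, y_train).predict(x_new.reshape(1, -1))
```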
[0063] In another embodiment, the machine-learning model (130) is generated using more than one type of machine-learning algorithm. For example, the machine-learning model (130) may be generated using a support vector machine and a cubist and lasso regression. In this example, the first and second smooth splined datasets may be provided as inputs to the support vector machine, and the third smooth splined dataset may be provided as input to the cubist and lasso regression. Thus, the generated machine-learning model (130) uses the support vector machine for monitoring the fluid level in the first range (e.g., from zero to 40 percent of the total volume of the holding object 110) and the second range (e.g., from 40 percent to 80 percent of the total volume of the holding object 110) in real-time. Further, the generated machine-learning model (130) uses the cubist and lasso regression for
monitoring the fluid level in the third range (i.e., from 80 percent to the total volume) in real-time.
[0064] Subsequent to generation of the machine-learning model (130) based on the
smooth splined training data and one or more desired machine-learning algorithms, the performance of the generated machine-learning model (130) in identifying the fluid levels in the holding objects is validated based on the validation data, as described in detail with reference to FIG. 2B. From the validation data, which is obtained by segmenting the acoustic frames and the acoustic features corresponding to the acoustic frames, a first validation dataset including sets of acoustic features is selected. The first validation dataset having the sets of acoustic features is extracted from the acoustic signals generated when the volume of the fluid being filled in the sample holding objects is in the first range (e.g., from zero to 40 percent of the total volume of the sample holding objects).
[0065] Similarly, from the validation data, a second validation dataset and a third validation dataset are selected. The second validation dataset having sets of acoustic features is extracted from the acoustic signals generated when the volume of the fluid being filled in the sample holding objects is between approximately 40 percent and 80 percent of the total volume of the holding objects. The third validation dataset having sets of acoustic features is extracted from the acoustic signals generated when the volume of the fluid being filled in the sample holding objects is between 80 percent and 100 percent of the total volume of the corresponding holding objects. Thus, the selected first, second, and third validation datasets are used to validate the generated machine-learning model (130), as described in the subsequent paragraphs.
[0066] At step (218), a set of acoustic features selected from each of the first
validation dataset, the second validation dataset, and the third validation dataset are provided as inputs to the generated machine-learning model (130). At step (220), the machine-learning model (130) identifies a corresponding fluid level in the holding object (110)
for each of the given inputs based on the training provided to the machine-learning model (130). At step (222), a difference between the identified fluid level and an actual fluid level for each of the given inputs is determined. At step (224), the difference between the identified fluid level and the actual fluid level for each of the given inputs is checked to identify whether the difference falls within a corresponding defined threshold. If the difference identified for each of the given inputs falls within the corresponding defined threshold, the performance of the generated machine-learning model (130) in identifying the fluid levels in the holding objects is considered acceptable.
[0067] In one exemplary implementation, a set of acoustic features selected from
the first validation dataset may be provided as an input to the machine-learning model (130). Assume that the actual fluid level in the holding object (110) is 30 percent of the total volume and that the defined acceptable error threshold for monitoring the fluid level in the first range (i.e., 0-40 percent of the total volume) is 1 percent. In this example, for the given set of acoustic features, if the machine-learning model (130) identifies that the fluid level in the holding object (110) is 10 percent of the total volume, then the performance of the generated machine-learning model (130) for monitoring the fluid level in the first range may be considered unacceptable. Similarly, one or more sets of acoustic features selected from the second and third validation datasets are provided as inputs to the machine-learning model (130), and the performance of the machine-learning model (130) in identifying the fluid levels in the second range (i.e., 40-80 percent of the total volume) and the third range (i.e., 80-100 percent of the total volume) is evaluated.
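The per-range validation check may be sketched as follows, using the exemplary thresholds of 1, 0.5, and 0.2 percent given in this section:

```python
import numpy as np

THRESHOLDS_PCT = {"first": 1.0, "second": 0.5, "third": 0.2}  # acceptable error

def validate_range(model, X_val: np.ndarray, y_actual: np.ndarray, rng: str) -> bool:
    """Accept the model for a range only if every prediction error is within
    that range's defined acceptable error threshold."""
    errors = np.abs(model.predict(X_val) - y_actual)
    return bool(np.all(errors <= THRESHOLDS_PCT[rng]))
```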
[0068] In certain embodiments, the acceptable error threshold associated with the
second range may be selected to be comparatively less than the acceptable error threshold associated with the first range. An exemplary acceptable error threshold associated with the second range is 0.5 percent. Further, the acceptable error threshold may be selected to be comparatively smaller for the third range than for the first and second ranges because a large error in the identified fluid
level in the third range may cause overflow of the fluid from the holding object (110). For example, if the holding object (110) is already filled to 98 percent of its volume and if the machine-learning model (130) identifies that the holding object (110) is filled only up to 90 percent of its volume, the error may lead to overflow of the fluid from the holding object (110). Therefore, an exemplary acceptable error threshold associated with the third range may be 0.2 percent.
[0069] Subsequent to successful validation of the performance of the machine-
learning model (130) in identifying the fluid levels across all ranges, at step (226), the validated machine-learning model (130) is saved in the embedded device (112) of FIG. 1 for monitoring the fluid level in the holding object (110) in real-time. In certain embodiments, when the performance of the machine-learning model (130) in identifying the fluid levels across all ranges is not acceptable, a new machine-learning model (130) is generated and is validated before saving the new machine-learning model (130) in the embedded device (112).
[0070] In one embodiment, the new machine-learning model (130) is generated by
changing one or more hyper-parameters associated with the failed machine-learning model (130). For example, changing a hyper-parameter includes changing the kernel associated with the failed machine-learning model (130) from the linear networks to the radial basis function networks. In another embodiment, the new machine-learning model (130) is generated by changing the machine-learning algorithm used in the failed machine-learning model (130). For example, when the failed machine-learning model (130) uses the support vector machine for monitoring the fluid level, the new machine-learning model (130) may use neural networks.
[0071] In certain scenarios, the performance of the machine-learning model (130)
may be determined to be acceptable in identifying the fluid levels in certain ranges but not acceptable in identifying the fluid levels in some other ranges. For example, the generated machine-learning model (130) may appropriately identify the fluid level in
the first and second ranges within the defined acceptable error thresholds but may not appropriately identify the fluid level in the third range. In such scenarios, the smoothing spline method is applied on an acoustic feature that is different from the previously selected acoustic feature. For example, if the smoothing spline method was previously applied over the spectral entropy, the smoothing spline method may be applied over the spectral energy. Accordingly, the training data having the normalized values associated with the spectral energy is updated by fitting the normalized values of the spectral energy to a smooth curve using the spline function, as noted previously, to obtain updated training data. With the updated training data, the machine-learning model (130) is trained again to monitor the fluid level across all ranges and is validated based on the validation data. Subsequently, the validated machine-learning model (130) is stored in the embedded device (112).
[0072] In another example, the generated machine-learning model (130) uses the
support vector machine for monitoring the fluid level in the first range and uses the cubist and lasso regression for monitoring the fluid level in the second and third ranges. The support vector machine may appropriately identify the fluid level in the first range, but the cubist and lasso regression may not appropriately identify the fluid level in the second and third ranges. In this example, the generated machine-learning model (130) is updated by using a new machine-learning algorithm (e.g., neural networks) instead of the cubist and lasso regression for monitoring the fluid level in the second and third ranges. The updated machine-learning model (130) is validated, and the validated machine-learning model (130) is stored in the embedded device (112) for monitoring the fluid level, in the different types of holding objects used during the training process, across all ranges in real-time, as described in detail with reference to FIG. 4.
[0073] FIG. 4 is a flow diagram (400) illustrating an exemplary method for
monitoring the fluid level in the holding object (110) in real-time using the fluid level
monitoring system (100) of FIG. 1. As previously noted with reference to the description of FIG. 1, the fluid passes out of the acoustic chamber (106) and fills the holding object (110). When the fluid is being filled in the holding object (110), the acoustic signals are generated. The acoustic signal-capturing device (108) records the generated acoustic signals and provides the generated acoustic signals as one or more inputs to the acoustic signal processing system (126). Thus, at step (402), the acoustic signal processing system (126) receives from the acoustic signal-capturing device (108), continuously or at regular intervals, the acoustic signals that are generated when the fluid is being filled in the holding object (110).
[0074] Further, at step (404), the acoustic signal processing system (126)
fragments the received acoustic signals into a plurality of acoustic frames, each of a designated frame size; for example, each acoustic frame has 1024 audio samples. At step (406), the acoustic signal processing system (126) extracts one or more acoustic features from each of the acoustic frames, as previously described with reference to the description of FIG. 2A. Therefore, each of the acoustic frames has a corresponding set of acoustic features. At step (408), the acoustic signal processing system (126) normalizes values of the extracted acoustic features in each set such that the values of the extracted acoustic features fall within a selected range, for example, between 0 and 1. Collective sets of acoustic features having the normalized values are referred to herein as normalized acoustic data.
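A minimal Python sketch of steps (404) through (408) follows. For brevity it computes only three toy spectral features per frame, omits the MFCCs and spectral flux, and uses non-overlapping frames; these simplifications, the synthetic signal, and all names are assumptions of the sketch:

```python
import numpy as np

FRAME_SIZE = 1024  # audio samples per acoustic frame, as in step (404)

def fragment(signal: np.ndarray) -> np.ndarray:
    """Split a recorded signal into frames of FRAME_SIZE samples (step 404)."""
    n_frames = len(signal) // FRAME_SIZE
    return signal[: n_frames * FRAME_SIZE].reshape(n_frames, FRAME_SIZE)

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Toy per-frame spectral features (step 406); the system described
    here also extracts MFCCs and spectral flux."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    p = spectrum / (spectrum.sum() + 1e-12)
    energy = spectrum.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    flatness = np.exp(np.log(spectrum + 1e-12).mean()) / (spectrum.mean() + 1e-12)
    return np.array([energy, entropy, flatness])

def normalize(features: np.ndarray) -> np.ndarray:
    """Min-max normalize each feature column into [0, 1] (step 408)."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    return (features - lo) / np.where(hi > lo, hi - lo, 1.0)

signal = np.random.default_rng(2).standard_normal(44100)  # 1 s at 44100 Hz
frames = fragment(signal)
normalized_acoustic_data = normalize(np.array([extract_features(f) for f in frames]))
```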
[0075] At step (410), the acoustic signal processing system (126) fits the
normalized values associated with a specific type of acoustic feature in each set to a smooth curve using a spline function to obtain smooth splined acoustic data. In one embodiment, the specific type of acoustic feature is the same as the acoustic feature upon which the smoothing spline method is applied during generation of the machine-learning model (130). For example, if the smoothing spline method is applied over the spectral entropy during generation of the machine-learning model (130), the smoothing
spline method is applied over the same acoustic feature (i.e., the spectral entropy) in real-time as well for monitoring the fluid level in the holding object (110).
[0076] Application of the smoothing spline method over a selected acoustic feature
by fitting the normalized values to the smooth curve updates the normalized values of the selected acoustic feature in each set. Thus, the sets of acoustic features are further updated after application of the smoothing spline method over the selected acoustic feature, and such updated sets of acoustic features are referred to herein as smooth splined acoustic data. At step (412), the acoustic signal processing system (126) provides the smooth splined acoustic data as an input to the generated machine-learning model (130).
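Steps (410) and (412) might be realized as below; this reuses the hypothetical `normalized_acoustic_data` array from the earlier sketch, and the column index and smoothing factor are assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_feature(normalized_sets: np.ndarray, column: int, s: float = 0.5) -> np.ndarray:
    """Fit one feature column to a smooth curve across the incoming frames
    (steps 410 and 412); `column` must index the same feature that was
    smooth splined during model generation, e.g., the spectral entropy."""
    smoothed = normalized_sets.copy()
    idx = np.arange(len(normalized_sets))
    smoothed[:, column] = UnivariateSpline(idx, normalized_sets[:, column], s=s)(idx)
    return smoothed

# e.g., with `normalized_acoustic_data` from the previous sketch:
smooth_splined_acoustic_data = smooth_feature(normalized_acoustic_data, column=1)
```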
[0077] At step (414), the generated machine-learning model (130) continuously
monitors the fluid level when the fluid is being filled in the holding object (110) based on the smooth splined acoustic data and the smooth splined training data that is used to generate the machine-learning model (130). More specifically, the generated machine-learning model (130) identifies, for each set of acoustic features in the smooth splined acoustic data, a particular fluid level in the holding object (110) by identifying a fluid level associated with the same or a similar set of acoustic features in the smooth splined training data.
[0078] For example, a first set of acoustic features selected from the smooth
splined acoustic data includes MFCC 1 with an associated value of 0.1, spectral entropy with an associated value of 0.2 bits/nats, spectral energy with an associated value of 0.3 joules, spectral flatness with an associated value of 0.8, and spectral flux with an associated value of 1.0 Hz⁻¹. If the fluid level associated with the same set of acoustic features in the smooth splined training data is 1 percent of the total volume of the holding object (110), then, in real-time, the machine-learning model (130) identifies that the fluid level in the holding object (110) is 1 percent of the total volume based on the first set of acoustic features and the smooth splined training data. Similarly, it is to
be understood that as and when the holding object (110) is being filled, the machine-learning model (130) continuously monitors the fluid level in the holding object (110) by identifying a particular fluid level associated with each subsequent set of acoustic features in the smooth splined acoustic data based on the smooth splined training data.
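As an illustration only of the matching performed at step (414), a nearest-neighbour regressor in Python approximates the "same or a similar set" lookup, with synthetic data standing in for the smooth splined training data; the specification leaves the actual algorithm of the model (130) open (e.g., support vector machine or neural network):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical smooth splined training data: feature sets paired with
# their known fluid levels (percent of the total volume).
rng = np.random.default_rng(3)
train_sets = rng.random((500, 3))
train_levels = np.linspace(0.0, 100.0, 500)

# Nearest-neighbour matching of a query feature set against the training
# sets; an assumption of this sketch, not the prescribed model (130).
matcher = KNeighborsRegressor(n_neighbors=3).fit(train_sets, train_levels)

def identify_level(feature_set: np.ndarray) -> float:
    """Step (414): map one smooth splined feature set to a fluid level."""
    return float(matcher.predict(feature_set.reshape(1, -1))[0])
```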
[0079] Once the machine-learning model (130) identifies that the fluid has reached
a desired level in the holding object (110), at step (416), the flow control device (114) stops the fluid inflow into the holding object (110). In one embodiment, the flow control device (114) stops the fluid inflow by closing a solenoid valve, which is kept at a desired location in the fluid flow path from the fluid source (102) to the holding object (110).
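A skeletal Python sketch of step (416) follows; the set point and `close_solenoid_valve()` are hypothetical stand-ins, since the specification does not define a software interface for the flow control device (114):

```python
DESIRED_LEVEL_PERCENT = 90.0  # illustrative set point, not from the specification

def close_solenoid_valve() -> None:
    # Stand-in for whatever relay/GPIO call the flow control device (114)
    # actually exposes; no such API is defined by this specification.
    print("solenoid valve closed; fluid inflow stopped")

def on_level_identified(level_percent: float) -> None:
    """Step (416): stop the inflow once the identified level reaches the set point."""
    if level_percent >= DESIRED_LEVEL_PERCENT:
        close_solenoid_valve()
```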
[0080] As described throughout the various embodiments presented herein,
application of the smoothing spline method over a particular acoustic feature and tuning of one or more hyper-parameters associated with the machine-learning model (130) improve the performance of the machine-learning model (130) in identifying the fluid levels in the holding objects. FIGs. 5A and 5B illustrate the difference in performance of the machine-learning model (130) in identifying the fluid levels with and without application of the smoothing spline method over a particular acoustic feature.
[0081] In particular, FIG. 5A is a graphical representation (500) illustrating the
performance of the machine-learning model (130) in identifying the fluid levels when the machine-learning model (130) is generated without applying the smoothing spline method over a particular acoustic feature. Specifically, the graphical representation (500) depicts a comparison between the fluid levels (502) identified by the machine-learning model (130) for given inputs of validation datasets and the actual fluid levels (504). As evident from the depictions of FIG. 5A, there are substantial deviations between the identified fluid levels (502) and the actual fluid levels (504) when the machine-
learning model (130) is generated without applying the smoothing spline method over an acoustic feature.
[0082] FIG. 5B is an exemplary graphical representation (506) illustrating the
performance of the machine-learning model (130) in identifying the fluid levels when the machine-learning model (130) is generated by applying the smoothing spline method over a particular acoustic feature. As evident from the depictions of FIG. 5B, the deviations between the identified fluid levels (508) and the actual fluid levels (510) are minimal when compared to those of FIG. 5A.
[0083] FIG. 5C is another exemplary graphical representation (512) illustrating the
performance of the machine-learning model (130) in identifying the fluid levels when the machine-learning model (130) is generated by applying the smoothing spline method over a particular acoustic feature and also by tuning one or more hyper-parameters of the machine-learning model (130). In this scenario, the deviations between the identified fluid levels (514) and the actual fluid levels (516) are further reduced when compared to those of FIG. 5A and FIG. 5B. Hence, the performance of the machine-learning model (130) in identifying the fluid levels improves when the smoothing spline method is applied over a particular acoustic feature and one or more hyper-parameters of the machine-learning model (130) are tuned.
[0084] The fluid level monitoring system (100) described herein uses the machine-
learning system (128) having the machine-learning model (130) for monitoring the fluid level in different types of holding objects. Unlike existing systems that use expensive sensors, such as ultrasonic sensors, optical sensors, and wireless sensors, which may require frequent calibration, the fluid level monitoring system (100) uses the machine-learning model (130), which makes the fluid level monitoring system (100) comparatively less expensive. Further, the machine-learning model (130) uses a plurality of acoustic features for monitoring the fluid level in the holding object (110),
which improves the accuracy and the performance of the machine-learning model (130) in identifying the fluid levels.
[0085] Although specific features of various embodiments of the present systems
and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[0086] While only certain features of the present systems and methods have been
illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the claimed invention.
Claims:
1. A method for monitoring a fluid level, the method comprising:
receiving acoustic signals that are generated when a fluid is being filled in a holding object (110);
extracting one or more acoustic features from the received acoustic signals;
providing one or more inputs comprising the extracted acoustic features to a machine-learning system (128) having a machine-learning model (130);
monitoring the fluid level in the holding object (110) using the machine-learning model (130) based on the extracted acoustic features and training data that is used to train the machine-learning model (130), wherein the training data comprises the acoustic features corresponding to various levels of the fluid in at least one type of holding object (110); and
automatically controlling the fluid level in the holding object (110) to be within a desired range based on the monitoring information.
2. The method as claimed in claim 1, wherein the one or more acoustic features comprise one or more mel-frequency cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux, wherein values associated with the extracted acoustic features vary based on a fluid level in the holding object (110).
3. The method as claimed in claim 2, further comprising:
receiving acoustic signals that are generated when the fluid is being filled to one or more selected levels in different types of the holding object (110), the different types comprising different shapes, sizes, volumes, and constituent materials corresponding to the holding object (110);
fragmenting each of the received acoustic signals into a plurality of acoustic frames, wherein each acoustic frame is of a designated frame size;
extracting the acoustic features from each of the acoustic frames; and
generating the training data to train the machine-learning model (130) based on the fragmented acoustic frames, wherein the training data comprises one or more sets of extracted acoustic features and a fluid level corresponding to each of the sets.
4. The method as claimed in claim 3, further comprising:
normalizing values of the extracted acoustic features in the training data to be within a selected range to obtain normalized training data comprising normalized acoustic features;
fitting normalized values associated with at least one type of acoustic feature selected from the extracted acoustic features to a smooth curve using a spline function to obtain a smooth splined acoustic feature, wherein the normalized training data is updated to smooth splined training data based on the smooth splined acoustic feature; and
generating the machine-learning model based on the smooth splined training data.
5. The method as claimed in claim 4, further comprising segmenting the fragmented acoustic frames and the acoustic features corresponding to the acoustic frames into the training data and validation data, wherein the training data is used to generate the machine-learning model (130), and wherein the validation data is used to validate the generated machine-learning model (130).
6. The method as claimed in claim 5, further comprising segmenting the training data into at least a first training dataset and a second training dataset, wherein the first training dataset comprises one or more sets of acoustic features that are used to monitor the fluid level in the holding object (110) in a first range, and wherein the second training dataset comprises other sets of acoustic features that are used to monitor the fluid level in the holding object (110) in a second range.
7. The method as claimed in claim 6, wherein the generated machine-learning model (130) comprises a machine-learning algorithm that is trained with both the first training dataset and the second training dataset to monitor the fluid level in the first range and the second range.
8. The method as claimed in claim 6, wherein the generated machine-learning model (130) comprises a first machine-learning algorithm that is trained with the first training dataset to monitor the fluid level in the first range and a second machine-learning algorithm that is trained with the second training dataset to monitor the fluid level in the second range.
9. The method as claimed in claim 8, further comprising:
providing a first validation dataset comprising a set of acoustic features that corresponds to a fluid level in the first range as an input to the generated machine-learning model (130);
determining whether a difference between the fluid level identified by the generated machine-learning model (130) based on the first validation dataset and an actual fluid level is within a first defined threshold value;
validating a performance of the generated machine-learning model (130) in identifying the fluid level in the first range based on the determined difference;
providing a second validation dataset comprising a set of acoustic features that corresponds to a fluid level in the second range as an input to the generated machine-learning model (130);
determining whether a difference between the fluid level identified by the generated machine-learning model (130) based on the second validation dataset and an actual fluid level is within a second defined threshold value, wherein the second defined threshold value is less than the first defined threshold value; and
validating a performance of the generated machine-learning model (130) in identifying the fluid level in the second range based on the determined difference.
10. The method as claimed in claim 9, further comprising:
updating the training data by fitting normalized values associated with another acoustic feature that is different from a previously selected acoustic feature to a smooth curve using a spline function to obtain updated training data having an updated smooth splined acoustic feature, wherein update of the training data is performed when the performance of the generated machine-learning model (130) in identifying the fluid level in at least one of the first range and the second range is not acceptable; and
generating an updated machine-learning model (130) based on the updated training data.
11. The method as claimed in claim 10, further comprising updating the generated machine-learning model (130) by changing a previously used machine-learning algorithm to monitor the fluid level in the first range by another machine-learning algorithm when the performance of the previously used machine-learning algorithm is not acceptable in identifying the fluid level in the first range.
12. The method as claimed in claim 11, further comprising updating the generated machine-learning model (130) by changing a previously used machine-learning algorithm to monitor the fluid level in the second range by another machine-learning algorithm when the performance of the previously used machine-learning algorithm is not acceptable in identifying the fluid level in the second range.
13. The method as claimed in claim 1, further comprising:
normalizing values associated with the extracted acoustic features to be within a selected range to obtain normalized acoustic data;
fitting normalized values associated with a specific type of acoustic feature that is smooth splined during generation of the machine-learning model (130) to a smooth curve using a spline function to obtain smooth splined acoustic data, wherein the smooth splined acoustic data is provided as the input to the machine-learning model (130) to monitor the fluid level in the holding object (110); and
stopping a fluid inflow into the holding object (110) when the fluid level in the holding object (110) reaches a desired level.
14. A system (100) for monitoring a fluid level, the system (100) comprising:
an acoustic signal-receiving device (108) that receives one or more acoustic signals that are generated when a fluid is being filled in a holding object (110); and
a processing device (112) operatively coupled to the acoustic signal-receiving device (108) and comprising an acoustic signal processing system (126) and a machine-learning system (128) comprising a machine-learning model (130), wherein the acoustic signal processing system (126) is configured to extract one or more acoustic features from the received acoustic signals, wherein the machine-learning model (130) is configured to monitor the fluid level in the holding object (110) based on the extracted acoustic features and training data used to train the machine-learning model (130), and wherein the training data comprises the acoustic features corresponding to various levels of the fluid in at least one type of the holding object (110).
15. The system (100) as claimed in claim 14, wherein the acoustic signal processing system (126) is further configured to:
normalize values associated with the extracted acoustic features to be within a selected range to obtain normalized acoustic data; and
fit normalized values associated with a specific type of acoustic feature that is smooth splined during generation of the machine-learning model to a smooth curve using a spline function to obtain smooth splined acoustic data, wherein the smooth splined acoustic data is provided as an input to the machine-learning model (130) to monitor the fluid level in the holding object (110).
16. The system (100) as claimed in claim 14, further comprising an acoustic chamber (106) inside which the acoustic signal-receiving device (108) is placed, wherein the acoustic chamber (106) is a noise-proof chamber that prevents undesired acoustic signals in the surrounding environment from entering into the acoustic chamber (106), and wherein the acoustic signal-receiving device (108) is a microphone.
17. The system (100) as claimed in claim 14, further comprising a fluid flow control device (114) that is communicatively coupled to the processing device (112) and is configured to control a fluid inflow into the holding object (110) based on the monitored information.
18. The system (100) as claimed in claim 14, wherein the system (100) comprises a home water dispensing unit, a beverage dispensing unit, a fuel dispensing unit, a fluid storage unit, a water treatment and purification unit, and a chemical treatment unit.
19. The system (100) as claimed in claim 14, further comprising a communication unit (132) communicatively coupled to a remote device (134) configured to control the fluid level in the holding object (110) by transmitting one or more actionable user inputs to the communication unit (132), the actionable inputs comprising an indication to stop the fluid flow, change a rate of the fluid flow, set a desired volume of fluid to be filled, set a desired instant of time for initiating fluid flow, and set a desired period of time for filling fluid in the holding object (110).
20. The system (100) as claimed in claim 19, further comprising an output device (136) communicatively coupled to one or more of the processing device (112), the communication unit (132), and the remote device (134), wherein the communication unit (132) is configured to transmit one or more of alerts and operational information to one or more of the output device (136) and the remote device (134) based on the fluid level in the holding object (110).
, Description:
BACKGROUND
[0001] Embodiments of the present specification relate generally to a system and method for monitoring a level of a fluid in a holding object, and more particularly to a system and method for monitoring a level of a fluid using acoustic signals.
[0002] Monitoring a liquid level is of great significance in many industrial and domestic applications. For example, in an oil industry application, pumping oil into a storage tank needs to be monitored to prevent oil spillage. In an automotive industry application, a level of a fuel in a vehicle fuel tank is monitored to check if a desired volume of fuel is refilled in the tank or not. In a household application, a water level in a container is monitored to control the flow of water automatically, and thus, preventing the water overflow from the container.
[0003] Conventional liquid level monitoring systems use a wide variety of sensors to monitor the liquid level. For example, a conventional liquid level monitoring system uses an ultrasonic sensor to monitor the liquid level in the container. The ultrasonic sensor can be of an invasive type or a non-invasive type. The invasive type of ultrasonic sensor is placed within the container for monitoring the liquid level, whereas the non-invasive type ultrasonic sensor is mounted on an external surface (e.g., an outer wall) of the container.
[0004] The ultrasonic sensor emits ultrasound signals that are transmitted onto the liquid surface situated in the container and uses reflected signals from the liquid surface to monitor the liquid level within the container. Since the invasive type ultrasonic sensor is exposed to the contents of the container, reliability and performance of the sensor are greatly reduced over time. The non-invasive type ultrasonic sensor may be expensive and may require frequent calibration that adds additional cost to the liquid level monitoring system.
[0005] Another conventional liquid level monitoring system uses a pressure sensor to monitor the liquid level in the container. The pressure sensor operates on the principle of a hydrostatic pressure. The hydrostatic pressure inside a liquid container is linearly proportional to the level of the liquid in the container. Hence, the liquid level in the container can be determined by measuring the fluid pressure in the container. Most conventional pressure sensors are made up of a silicone elastic type material. When a load is applied on such pressure sensors for measuring the liquid level in the container, the silicone elastic type material located between electrodes of the pressure sensors deforms. Deforming of the silicone elastic type material causes the pressure sensors to provide different outputs for the same load over a period.
[0006] Another type of sensor typically used for the liquid level monitoring is an electrical sensor. The electrical sensor operates on the principle of electrical conductivity for measuring a level of conductive liquid such as an inflammable liquid within a tank. The electrical sensor includes two electrodes. The metal wall of the tank acts as a first electrode and an electrical probe is inserted into the tank. When the conductive liquid is not in connection with the electrical probe, the electrical resistance is relatively high between the probe and the metal tank wall. As the level of the liquid raises within the tank, the electrical resistance between the probe and the metal tank wall proportionately decreases. The electrical sensor is typically used to track the level of inflammable liquid within a tank and its usage is not considered safe for household purposes like monitoring a water level in a water filtration container.
[0007] Apart from the above-mentioned sensors, there are also other types of sensors such as a float sensor, a capacitance sensor, an optical sensor, a radar sensor, a wireless sensor, etc. that are used for the liquid level monitoring. However, such sensors are also expensive. Accordingly, there remains a need for a cost effective system and method that avoids usage of expensive sensors for monitoring fluid levels in a holding object.
BRIEF DESCRIPTION
[0008] According to an exemplary aspect of the present specification, a method for monitoring a fluid level is provided. The method includes receiving acoustic signals that are generated when the fluid is being filled in a holding object, extracting one or more acoustic features from the received acoustic signals, and providing one or more inputs including the extracted acoustic features to a machine-learning system having a machine-learning model. The fluid level in the holding object is monitored using the machine-learning model based on the extracted acoustic features and training data that is used to train the machine-learning model. The training data includes the acoustics features corresponding to various levels of the fluid in at least one type of holding object. The fluid level in the holding object is automatically controlled to be within a desired range based on the monitoring information.
[0009] The one or more acoustic features include one or more mel-frequency cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux. Values associated with the extracted acoustic features may vary based on a fluid level in the holding object. Acoustic signals that are generated when the fluid is being filled to one or more selected levels in different types of the holding object may be received. The different types of the holding object may include different shapes, sizes, volumes, and constituent materials corresponding to the holding object. Each of the received acoustic signals may be fragmented into a plurality of acoustic frames. Each acoustic frame may be of a designated frame size. The one or more acoustic features may be extracted from each of the acoustic frames. The training data may be generated to train the machine-learning model based on the fragmented acoustic frames. The training data may include one or more sets of extracted acoustic features and a fluid level corresponding to each of the sets.
[0010] Values of the extracted acoustic features in the training data may be normalized to be within a selected range to obtain normalized training data comprising normalized acoustic features. The normalized values associated with at least one type of acoustic feature selected from the extracted acoustic features may be fitted to a smooth curve using a spline function to obtain a smooth splined acoustic feature. The normalized training data may be updated to smooth splined training data based on the smooth splined acoustic feature. The machine-learning model may be generated based on the smooth splined training data.
[0011] The fragmented acoustic frames and the acoustic features corresponding to the acoustic frames may be segmented into training data and validation data. The training data may be used to generate the machine-learning model. The validation data may be used to validate the generated machine-learning model. The training data may be segmented into at least first training dataset and second training dataset. The first training dataset includes sets of acoustic features that may be used to monitor the fluid level in the holding object in a first range. The second training dataset includes other sets of acoustic features that may be used to monitor the fluid level in the holding object in a second range.
[0012] The generated machine-learning model includes a machine-learning algorithm that may be trained with both the first training dataset and the second training dataset to monitor the fluid level in the first range and the second range. The generated machine-learning model includes a first machine-learning algorithm that may be trained with the first training dataset to monitor the fluid level in the first range and a second machine-learning algorithm that may be trained with the second training dataset to monitor the fluid level in the second range. First validation dataset including a set of acoustic features that corresponds to a fluid level in the first range may be provided as an input to the generated machine-learning model.
[0013] A difference between the fluid level identified by the generated machine-learning model based on the first validation dataset and an actual fluid level may be determined and identified if the difference is within a first defined threshold value. A performance of the generated machine-learning model in identifying the fluid level in the first range may be validated based on the determined difference. Second validation dataset including a set of acoustic features that corresponds to a fluid level in the second range may be provided as an input to the generated machine-learning model. A difference between the fluid level identified by the generated machine-learning model based on the second validation dataset and an actual fluid level may be determined and identified if the difference is within a second defined threshold value. The second defined threshold value may be lesser than the first defined threshold value. A performance of the generated machine-learning model in identifying the fluid level in the second range may be validated based on the determined difference.
[0014] The training data may be updated by fitting normalized values associated with another acoustic feature that is different from a previously selected acoustic feature to a smooth curve using a spline function to obtain updated training data having an updated smooth splined acoustic feature. The update of the training data may be performed when the performance of the generated machine-learning model in identifying the fluid level in at least one of the first range and the second range is not acceptable. An updated machine-learning model may be generated based on the updated training data.
[0015] The generated machine-learning model may be updated by changing a previously used machine-learning algorithm to monitor the fluid level in the first range by another machine-learning algorithm when the performance of the previously used machine-learning algorithm is not acceptable in identifying the fluid level in the first range. The generated machine-learning model may be updated by changing a previously used machine-learning algorithm to monitor the fluid level in the second range by another machine-learning algorithm when the performance of the previously used machine-learning algorithm is not acceptable in identifying the fluid level in the second range.
[0016] Values associated with the extracted acoustic features may be normalized to be within a selected range to obtain normalized acoustic data. The normalized values associated with a specific type of acoustic feature that is smooth splined during generation of the machine-learning model may be fitted to a smooth curve using a spline function to obtain smooth splined acoustic data. The smooth splined acoustic data may be provided as the input to the machine-learning model to monitor the fluid level in the holding object. A fluid inflow into the holding object may be stopped when the fluid level in the holding object reaches a desired level.
[0017] According to another exemplary aspect of the present specification, a system for monitoring a fluid level is provided. The system includes an acoustic signal-receiving device and a processing device that is operatively coupled to the acoustic signal-receiving device. The acoustic signal-receiving device receives one or more acoustic signals that are generated when a fluid is being filled in a holding object. The processing device includes an acoustic signal processing system and a machine learning system having a machine-learning model. The acoustic signal processing system is configured to extract one or more acoustic features from the received acoustic signals. The machine-learning model is configured to monitor the fluid level in the holding object based on the extracted acoustic features and training data that is used to train the machine-learning model. The training data includes the acoustics features corresponding to various levels of the fluid in at least one type of the holding object.
[0018] The acoustic feature processing system may be further configured to normalize values associated with the extracted acoustic features to be within a selected range to obtain normalized acoustic data. Normalized values associated with a specific type of acoustic feature that is smooth splined during generation of the machine-learning model may be fitted to a smooth curve using a spline function to obtain smooth splined acoustic data. The smooth splined acoustic data may be provided as an input to the machine-learning model to monitor the fluid level in the holding object. The system may further include an acoustic chamber inside which the acoustic signal-receiving device is placed. The acoustic chamber may be a noise-proof chamber that prevents undesired acoustic signals in the surrounding environment from entering into the acoustic chamber. The acoustic signal-receiving device may be a microphone.
[0019] The system further includes a fluid flow control device that is communicatively coupled to the processing device and is configured to control a fluid inflow into the holding object based on the monitored information. The system may include a home water dispensing unit, a beverage dispensing unit, a fuel dispensing unit, a fluid storage unit, a water treatment and purification unit, and a chemical treatment unit. The system may further include a communication unit communicatively coupled to a remote device that may be configured to control the fluid level in the holding object. The system further includes an output device communicatively coupled to one or more of the processing device, the communication unit, and the remote device. The communication unit is configured to transmit one or more of alerts and operational information to one or more of the output device and the remote device based on the fluid level in the holding object.
DRAWINGS
[0020] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0021] FIG. 1 is a schematic diagram illustrating an exemplary fluid level monitoring system, according to one embodiment of the present disclosure;
[0022] FIG. 2A and FIG. 2B depict a flow diagram illustrating an exemplary method for generating a machine-learning model that is configured to monitor fluid levels in a holding object using the system of FIG. 1;
[0023] FIG. 3A is an exemplary graphical representation depicting variations in values of spectral entropy for a particular acoustic frame selected from acoustic signals used by the system of FIG. 1 for monitoring fluid levels in holding objects;
[0024] FIG. 3B is another exemplary graphical representation depicting variations in values of spectral energy for a particular acoustic frame selected from acoustic signals used by the system of FIG. 1 for monitoring fluid levels in holding objects;
[0025] FIG. 3C is another exemplary graphical representation depicting a first smooth curve and a second smooth curve that are obtained after applying the smoothing spline method over the spectral entropy of FIG. 3A and the spectral energy of FIG. 3B, respectively;
[0026] FIG. 4 is a flow diagram illustrating an exemplary method for monitoring the fluid level in the holding object in real-time using the fluid level monitoring system of FIG. 1;
[0027] FIG. 5A is an exemplary graphical representation illustrating the performance of the machine-learning model of FIG. 2B that is generated without applying the smoothing spline method over a particular acoustic feature;
[0028] FIG. 5B is another exemplary graphical representation illustrating the performance of the machine-learning model of FIG. 2B that is generated by applying the smoothing spline method over a particular acoustic feature; and
[0029] FIG. 5C is yet another exemplary graphical representation illustrating the performance of the machine-learning model of FIG. 2B that is generated by applying the smoothing spline method and by tuning one or more hyper-parameters of the machine-learning model.
DETAILED DESCRIPTION
[0030] The following description presents exemplary systems and methods for monitoring a fluid level in a holding object. Particularly, embodiments described herein disclose systems and methods for monitoring the fluid level in the holding object based on acoustic signals that are generated when the fluid is being filled in the holding object. It may be noted that different embodiments of the present fluid level monitoring system may be used to monitor fluid level in many industrial applications such as in the oil industry, automotive industry, water treatment and purification industry, chemical treatment industry, and gas industry. The fluid level monitoring system may also be used to monitor the fluid level in a household application, for example, for monitoring a beverage level (e.g., a water or coffee level) in a container and for automatically stopping the water inflow when the water in the container reaches a desired level. However, for clarity of explanation, the fluid level monitoring system will be described herein only with reference to the household application in which the fluid level in a holding object is continuously monitored and the fluid inflow is controlled to fill the holding object only up to a desired level, as described in detail with reference to FIG. 1.
[0031] FIG. 1 is a schematic diagram illustrating an exemplary fluid level monitoring system (100). The fluid level monitoring system (100) includes a fluid source (102), a fluid mobilizer (104), an acoustic chamber (106), an acoustic signal-capturing device (108), a holding object (110), an embedded device (112), and a flow control device (114). In one embodiment, the fluid source (102) is a fluid storage medium that stores the fluid. The fluid source (102) is operatively coupled to the fluid mobilizer (104) that receives the fluid from the fluid source (102) through a channel (116). In certain embodiments, the channel (116) is a pipe that interconnects the fluid source (102) and the fluid mobilizer (104), whereas the fluid mobilizer (104) is a pump. The fluid mobilizer (104) pumps the fluid received from the fluid source (102) into the acoustic chamber (106) through another channel (118).
[0032] The acoustic chamber (106) includes an inlet (120) through which the fluid flows into the acoustic chamber (106) from the fluid mobilizer (104) and an outlet (122) through which the fluid flows out to the holding object (110). In one embodiment, the acoustic chamber (106) is a sealed chamber that prevents entering of air from the surroundings into the acoustic chamber (106). The acoustic chamber (106) is also a noise-proof chamber that does not allow external noise signals entering into the acoustic chamber (106). The fluid, thus flowing out through the outlet (122) of the acoustic chamber (106), passes through a channel (124) and fills the holding object (110). As the fluid is being filled in the holding object (110), acoustic signals are generated. The fluid level monitoring system (100) records the generated acoustic signals using the acoustic signal-capturing device (108) and uses the recorded acoustic signals for monitoring the fluid level in the holding object (110). An example of the acoustic signal-capturing device (108) is a microphone.
[0033] In certain embodiments, the acoustic signal-capturing device (108) is placed within the acoustic chamber (106) and may be in direct contact with the fluid in the acoustic chamber (106). In an alternative embodiment, however, a protective casing may be provided for the acoustic signal-capturing device (108) such that the acoustic signal-capturing device (108) may not be in contact with the fluid in the acoustic chamber (106). In one embodiment, the acoustic signals, generated when the fluid is being filled in the holding object (110), are transmitted to the acoustic chamber (106) through the channel (124) and are recorded by the acoustic signal-capturing device (108).
[0034] In an embodiment, the acoustic signal-capturing device (108) transmits the recorded acoustic signals to the embedded device (112) through a conductive medium (125). An example of the conductive medium (125) is one or more electrical wires. The embedded device (112) corresponds to a processing device, including but not limited to, one or more general-purpose processors, specialized processors, graphical processing units, microprocessors, programming logic arrays, field programming gate arrays, and/or other suitable computing devices. Particularly, in certain embodiments, the embedded device (112) includes an acoustic signal processing system (126) and a machine-learning system (128). The embedded device (112) receives the acoustic signals recorded by the acoustic signal-capturing device (108) and provides the recorded acoustic signals as inputs to the acoustic signal processing system (126).
[0035] The acoustic signal processing system (126) continuously extracts one or more acoustic features from the acoustic signals. Examples of the extracted acoustic features include but are not limited to one or more mel-frequency cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux. In certain embodiments, the extracted acoustic features may serve as signatures or indicators of the fluid level in the holding object (110). Values associated with these acoustic features vary depending on a level of the fluid in the holding object (110). For example, values of the acoustic features extracted from the acoustic signals that are generated when 10 percent of the holding object is filled differ from values of the acoustic features extracted from the acoustic signals that are generated when 11 percent of the holding object is filled. Thus, the fluid level monitoring system (100) uses the extracted acoustic features for monitoring the fluid level in the holding object (110).
[0036] In one embodiment, the acoustic signal processing system (126) further processes the extracted acoustic features using the method described in greater detail with respect to FIG. 2A and provides the processed acoustic features as one or more inputs to the machine-learning system (128). In certain embodiments, the machine-learning system (128) includes a machine-learning model (130) that is generated and is trained to monitor the fluid level in the holding object (110) based on the processed acoustic features and training data that is used to train the machine-learning model (130), as described in greater detail with respect to FIG. 2.2A and FIG. 2B. The fluid levels in the holding object (110), thus identified by the machine-learning system (128), are provided as inputs to the flow control device (114).
[0037] In certain embodiments, the flow control device (114) is operatively coupled to the embedded device (112) and obtains the fluid level information in the holding object (110) from the embedded device (112). Further, the flow control device (114) is configured to stop the fluid inflow into the holding object (110) when the fluid level in the holding object (110) reaches a desired level. In one embodiment, the flow control device (114) is a control relay circuit that opens and/or closes a solenoid valve (not shown in FIG. 1), which is kept at a desired location in a fluid flow path from the fluid source (102) to the holding object (110) for allowing or stopping the fluid inflow into the holding object (110).
[0038] In one embodiment, the fluid level monitoring system (100) further includes a communication unit (132) that is operatively coupled to one or more of the embedded device (112) and to the flow control device (114). When the fluid is being filled in the holding object (110) in real-time, the communication unit (132) obtains the fluid level information in the holding object (110) from the embedded device (112) and sends one or more status messages to a remote device (134) and/or an output device (136) such as an audio-video display unit to provide real-time fluid level information to a user associated with the remote device (134). The status messages may be sent to the remote device (134) and/or the output device (136) at regular designated time intervals or continuously. Alternatively, the communication unit (132) sends an alert message to the remote device (134) and/or the output device (136) to notify or alert the user about the fluid level in the holding object (110) before the fluid level in the holding object (110) reaches a desired level.
[0039] In one embodiment, the communication unit (132) communicates with the remote device (134) via a communication network, for example, a Wi-Fi network, an Ethernet, and a cellular data network. Examples of the remote device (134) include a cellular phone, a laptop, a tablet-computing device, and a desktop computer. In one embodiment, the user remotely controls the fluid level in the holding object (110) using an application residing on the remote device (134). For example, the user may provide one or more actionable inputs using the remote device (134) to control the fluid flow in the holding object (110) prior to initiation or during the flow of the fluid into the holding object (110). Examples of the actionable user inputs include an indication to stop the fluid flow, change a rate at which the fluid currently fills the holding object (110), set a desired volume of fluid to be filled, set a desired instant of time or a period of time for filling fluid in the holding object (110), etc. The communication unit (132) receives the one or more actionable inputs from the remote device (134) and provides the received inputs to the flow control device (114) in order to control the fluid flow based on the received inputs. A methodology employed for generating the machine-learning model (130) that is configured to monitor the fluid level in the holding object (110) is described in detail with reference to FIG. 2A.
[0040] FIG. 2A is a flow diagram (200) illustrating an exemplary method for generating the machine-learning model (130) of the machine-learning system (128) that is configured to monitor the fluid level in the holding object (110) of FIG. 1. In certain embodiments, the machine-learning model (130) is generated offline, and subsequently the generated machine-learning model (130) is used for monitoring the fluid level in real-time. For generating the machine-learning model (130), the fluid level monitoring system (100) described with respect to FIG. 1 can be used except that the embedded device (112) in a model generation scenario acts as a storage device for storing the acoustic signals. The order in which the exemplary method (200) is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein.
[0041] At step (202), the acoustic signal processing system (126) receives acoustic signals that are generated when the fluid is being filled up to a plurality of selected levels in at least one type of the holding object (110) from the acoustic signal-capturing device (108. In one embodiment, when the acoustic signal-capturing device (108) records the generated acoustic signals, a sampling rate of 44100 Hz is maintained. When generating the machine-learning model (130), the acoustic signals generated when the fluid is being filled in various types of containers up to various selected levels are recorded and a plurality of audio files are generated based on the recorded acoustic signals.
[0042] For example, acoustic signals generated when the fluid is being filled in a first type of container up to a first level (e.g., 1 percent of the container volume) are recorded and a first audio file is generated based on the recorded acoustic signals. Examples of the types of containers include plastic containers, glass containers, metal containers, and paper containers. Further, the containers may be of different shapes and sizes, and accordingly the containers may have different volumes. In another example, acoustic signals generated when the fluid is being filled in the first type of container up to a second level (e.g., 5 percent of the container volume) are recorded and a second audio file is generated based on the acoustic signals. In yet another example, acoustic signals generated when the fluid is being filled in a second type of container up to a first level (e.g., 10 percent of the container volume) are recorded and a third audio file is generated based on the acoustic signals.
[0043] Similarly, it is to be understood that, the plurality of audio files are generated based on the acoustic signals generated when the fluid is being filled in various types of containers up to various selected levels. The generated audio files having the acoustic signals are then used to generate the machine-learning model (130) such that the machine-learning model (130) monitors the fluid at any level in any type of containers in real-time.
[0044] At step (204), the acoustic signals received from the acoustic signal-capturing device (108) are fragmented into a plurality of acoustic frames. More specifically, each of the generated audio files having the acoustic signals that are fragmented into the plurality of acoustic frames, and each of the acoustic frames is of a designated frame size. In one embodiment, the designated frame size is 1024 and hence each acoustic frame has 1024 audio samples. For example, a 10 seconds audio file including the acoustic signals may be fragmented into 10 acoustic frames, and hence each acoustic frame length is one-second. Each one-second acoustic frame may have 1024 audio samples. In certain embodiments, the designated frame size is kept at 1024 for extracting the acoustic features from the acoustic frames such that there is a 50 percent overlap between two subsequent acoustic frames. For example, while extracting a first set of acoustic features from a first acoustic frame, acoustic information from the first second is considered. Whereas, while extracting a second set of acoustic features, acoustic information from 0.5 second to 1.5 seconds is considered.
[0045] At step (206), one or more acoustic features are extracted from each of the acoustic frames. The acoustic signal processing system (126) reads the audio files having the fragmented audio frames at the sampling rate (e.g., 44100 Hz), which is same as the sampling rate maintained at the time of recording the generated acoustic signals. The audio files having the fragmented audio frames are subjected to a window function such that less side lobes are generated when compared to main lobes. As noted previously, the extracted acoustic features from the acoustic frames, include but are limited to, one or more mel-frequency cepstral coefficients, spectral energy, spectral entropy, spectral flatness, and spectral flux. A methodology or a technique used to determine a value associated with each of the acoustic features is described in the subsequent paragraphs.
[0046] The mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up a mel-frequency cepstrum (MFC) that is a representation of the short-term power spectrum of an acoustic signal. In one embodiment, the MFCCs are derived from each of the fragmented acoustic frames by performing a digital cosine transform on a mel-signal. It is to be noted that transformation of the mel-signal based on the digital cosine transform ensures that the acoustic signals under consideration have a representation of spectrum that is sensitive in a way similar to how human hearing works.
[0047] Further, in certain embodiments, while transforming the mel-signal using the digital cosine transform, 40 cepstral coefficients including higher order cepstral coefficients and lower order cepstral coefficients are generated from every acoustic frame of the acoustic signals. In one embodiment, for generating the machine-learning model (130), only 13 cepstral coefficients that correspond to the lower order cepstral coefficients are considered. The remaining cepstral coefficients that correspond to the higher order cepstral coefficients are discarded as those cepstral coefficients were determined as being not critical for monitoring the fluid level in the holding object (110) in real-time. The mel-signal includes filter banks are generally overlapped with each other, and hence acoustic energies associated with the filter banks are correlated with each other. The digital cosine transform is performed on the mel-signal to de-correlate the acoustic energies associated with the filter banks. Thus, diagonal covariance matrices having the mel-frequency cepstral coefficients can be used to generate the machine-learning model (130) using any machine-learning algorithms, for example, using a support vector machine.
[0048] In certain embodiments, a value associated with an acoustic feature, that is, the spectral energy is determined in accordance with equation (1).
E_s={x(t),x(t)}= ?_(- 8)^8¦?\|?x(t)|?^2 dt? (1)
where, E_s represents the spectral energy and x(t) represents the acoustic signals in time-domain representation. Generating the machine-learning model (130) using the spectral energy enables the machine-learning model (130) to correlate the spectral energy to a particular fluid level in the holding object (110) in real-time. The value determined using the equation (1) provides the spectral energy for a window, and hence the spectral energy associated with each of the acoustic frames is obtained by normalizing the determined spectral energy using a corresponding acoustic frame length.
[0049] In one embodiment, another acoustic feature, which is spectral entropy provides data on how differently the spectral energy distributes when the fluid fills various types of holding objects up to various levels. The machine-learning model (130) is generated using the spectral entropy during a model generation scenario such that a generated machine-learning model (130) correlates the spectral entropy to a particular fluid level in the holding object (110) irrespective of a type of the holding object (110) used in real-time. In an embodiment, the spectral entropy is calculated in accordance with equation (2).
PSE= -?_(i=1)^n¦?P_(i ) ln??P_(i ) ? ? (2)
where, PSE corresponds to power spectral entropy, and P_i corresponds to a probability density function that is obtained by normalizing a power density function P(?_i) in accordance with equation (3).
P_i=(P(?_i))/(?_i¦?P(?_i)?) (3)
where, the power density function P(?_i) is obtained using N that represents a length of an acoustic signal, X(?_i ) that represents the acoustic signal, and ?_i represents an angular frequency of the acoustic signal in accordance with equation (4).
P(?_i )=1/N \|?X(?_i )|?^2 (4)
[0050] In certain embodiments, a further acoustic feature that corresponds to spectral flatness is a measure used to characterize an audio spectrum. The machine-learning model (130) is generated using the spectral flatness during a model generation scenario such that a generated machine-learning model (130) correlates the spectral flatness to a particular fluid level in the holding object (110) irrespective of a shape of the holding object (110) used in real-time. The spectral flatness is calculated by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum in accordance with equation (5).
$$F = \frac{\sqrt[N]{\prod_{n=0}^{N-1} x(n)}}{\frac{1}{N}\sum_{n=0}^{N-1} x(n)} = \frac{\exp\left(\frac{1}{N}\sum_{n=0}^{N-1} \ln x(n)\right)}{\frac{1}{N}\sum_{n=0}^{N-1} x(n)} \tag{5}$$
where N represents the length of an acoustic signal and x(n) represents the magnitude of bin number n.
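A minimal sketch of equation (5) over a magnitude spectrum follows; the small epsilon guarding the logarithm is an added implementation detail, not part of equation (5).

```python
import numpy as np

def spectral_flatness(magnitudes, eps=1e-12):
    x = np.asarray(magnitudes, dtype=float) + eps  # eps guards against ln(0)
    geometric_mean = np.exp(np.mean(np.log(x)))    # exp of the mean log
    arithmetic_mean = np.mean(x)
    return geometric_mean / arithmetic_mean        # equation (5)
```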
[0051] In certain embodiments, a further acoustic feature, the spectral flux, is a measure of how quickly the power spectrum of an acoustic signal changes. The spectral flux is calculated by comparing the power spectrum of one acoustic frame against the power spectrum of the previous acoustic frame. The machine-learning model (130) is generated using the spectral flux such that the spectral flux assists the machine-learning model (130) in countering any bias that may be introduced into the machine-learning model (130) by the usage of the spectral energy. Thus, the acoustic features are extracted from each of the acoustic frames.
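The specification states only that consecutive power spectra are compared; the squared-difference form below is one common definition of spectral flux and is offered as an assumption for illustration.

```python
import numpy as np

def spectral_flux(previous_power_spectrum, current_power_spectrum):
    # One common realization (assumed): the sum of squared bin-wise
    # differences between the power spectra of consecutive acoustic frames.
    diff = (np.asarray(current_power_spectrum, dtype=float)
            - np.asarray(previous_power_spectrum, dtype=float))
    return np.sum(diff ** 2)
```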
[0052] At step (208), the fragmented acoustic frames and the acoustic features corresponding to the acoustic frames are segmented into training data and validation data. The training data is used to generate the machine-learning model (130), whereas the validation data is used to validate the performance of the generated machine-learning model (130) in identifying the fluid levels in the holding objects. In certain embodiments, the training data includes sets of acoustic features extracted from the acoustic frames and the fluid level corresponding to each set.
[0053] For example, the training data includes a set having values associated with the extracted acoustic features where a first MFCC (MFCC 1) is 0.132, a second MFCC (MFCC 2) is -0.004, a thirteenth MFCC (MFCC 13) is 0.001, the spectral energy is 0.0002 joules, the spectral entropy is 0 bits/nats, the spectral flatness is 0.906, and the spectral flux is -29.2 Hz⁻¹. Values of the acoustic features in the set are represented using certain exemplary units merely for the sake of explanation, and it is to be understood that the training data may include the values of the acoustic features in any suitable units. The acoustic features in the exemplary set are extracted from an acoustic frame that corresponds to a specific fluid level in the holding object (110). For example, the acoustic frame may correspond to a fluid level when 10 percent of the holding object (110) is filled. Hence, the training data will include the exemplary set and the fluid level corresponding to the exemplary set as 10 percent of the total volume of the holding object (110). Similarly, it is to be understood that the training data includes a plurality of sets of acoustic features and a fluid level corresponding to each set.
[0054] In certain embodiments, the validation data includes only sets of acoustic features extracted from the acoustic frames and may not include the fluid level corresponding to each set. Once the machine-learning model (130) is generated using the training data, the validation data including the sets of acoustic features is provided as inputs to the machine-learning model (130), and the performance of the machine-learning model (130) in identifying the fluid levels for the given inputs is monitored to validate the performance of the generated machine-learning model (130).
[0055] Further, at step (210), the training data is segmented into at least two training datasets for training the machine-learning model (130) to monitor the fluid level across various ranges. For the sake of simplicity, the embodiments described herein provide segmentation of the training data into three datasets: a first training dataset, a second training dataset, and a third training dataset. The first training dataset includes sets of acoustic features and the fluid level associated with each of the sets, where the fluid level falls within a first range. For example, the sets having the acoustic features, which are all extracted from the acoustic signals generated when the fluid is being filled in the holding objects from zero to 40 percent of the total volume of the holding objects, are categorized as the first training dataset.
[0056] Similarly, the second training dataset includes sets of acoustic features that are all extracted from the acoustic signals generated when the volume of the fluid filled in the holding objects (110) falls in a second range, for example from 40 percent to 80 percent of the total volume of the holding objects. The third training dataset includes sets of acoustic features that are all extracted from the acoustic signals generated when the volume of the fluid filled in the holding objects (110) falls in a third range, for example from 80 percent to the total volume of the holding objects.
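A minimal sketch of the segmentation at step (210) follows, assuming the exemplary zero-40-80-100 percent boundaries given above; `rows` is a hypothetical list of (feature set, fluid level in percent) pairs.

```python
def segment_by_range(rows):
    # rows: list of (feature_set, fluid_level_percent) pairs (hypothetical).
    first, second, third = [], [], []
    for features, level in rows:
        if level <= 40:                  # first range: 0-40 percent
            first.append((features, level))
        elif level <= 80:                # second range: 40-80 percent
            second.append((features, level))
        else:                            # third range: 80-100 percent
            third.append((features, level))
    return first, second, third
```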
[0057] At step (212), the training data including the sets of extracted acoustic features is normalized to obtain normalized training data. More specifically, the values associated with all types of extracted acoustic features are normalized such that the values fall within a selected range, for example, between 0 and 1. Hence, the values of the extracted acoustic features in the first training dataset, the second training dataset, and the third training dataset are normalized to be within their corresponding selected ranges.
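One way to realize the normalization at step (212) is min-max scaling of each feature's values into [0, 1]; the following sketch assumes that choice, which the specification does not mandate.

```python
import numpy as np

def min_max_normalize(values):
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if hi == lo:                         # constant feature: map to zeros
        return np.zeros_like(values)
    return (values - lo) / (hi - lo)     # scaled into the range [0, 1]
```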
[0058] At step (214), the normalized values associated with one or more specific types of acoustic features are fitted to a smooth curve using a spline function based on a smoothing spline method to obtain smooth splined training data. The smoothing spline is a method of fitting the smooth curve to a set of noisy observations using the spline function, and the splining is performed using a controlling parameter lambda (λ). In certain embodiments, the lambda (λ) is a smoothing parameter that smoothens variations in the values of an acoustic feature and controls the trade-off between fidelity and roughness of the values associated with the acoustic feature. A roughness penalty is identified based on a second order derivative and is used to obtain the trade-off. Further, the splining provides a method of balancing a measure of fit with a penalty term based on sums-of-squares. The penalty term may be used as a prior distribution and may be used to estimate the spline based on Bayes' theorem.
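A sketch of step (214) using SciPy's smoothing spline (SciPy 1.10 or later), whose `lam` parameter plays the role of the lambda (λ) described above, is shown below; the choice of SciPy and the illustrative lam value are assumptions.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

def smooth_feature(fill_percent, feature_values, lam=1e-3):
    # lam is the smoothing parameter (lambda) trading fidelity against
    # roughness; 1e-3 is illustrative, not taken from the specification.
    x = np.asarray(fill_percent, dtype=float)
    y = np.asarray(feature_values, dtype=float)
    order = np.argsort(x)    # abscissas must be increasing (and duplicate-free)
    spline = make_smoothing_spline(x[order], y[order], lam=lam)
    return spline(x)         # smoothed values at the original points
```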
[0059] Fitting the normalized values associated with an acoustic feature to a smooth curve to smoothen variations in the values of the acoustic feature using the spline function is depicted in FIGS. 3A, 3B, and 3C. FIG. 3A is an exemplary graphical representation (300) depicting variations in the values of the spectral entropy for a particular acoustic frame before applying the smoothing spline method. Similarly, FIG. 3B is another exemplary graphical representation (302) depicting variations in the values of the spectral energy for a particular acoustic frame before applying the smoothing spline method. FIG. 3C is yet another graphical representation (304) depicting a first smooth curve (306) (represented using a solid line in FIG. 3C) that is obtained after applying the smoothing spline method over the spectral entropy of FIG. 3A. Further, FIG. 3C depicts a second smooth curve (308) (represented using a dotted line in FIG. 3C) that is obtained after applying the smoothing spline method over the spectral energy of FIG. 3B. An x-axis of the graphical representation (304) represents a percentage of fluid filled in the holding object (110) and a y-axis of the graphical representation (304) represents the normalized values of the spectral entropy and the spectral energy.
[0060] Application of the smoothing spline method over a particular acoustic feature updates the values of that acoustic feature in the training data. For example, when the smoothing spline method is applied over the spectral entropy values, the normalized values of the spectral entropy in the first training dataset, the second training dataset, and the third training dataset are updated based on the spline function. Thus, the first training dataset having sets of normalized acoustic features is updated to a first smooth splined dataset having updated values of the spectral entropy, while the values associated with the other acoustic features remain the same. Similarly, it is to be understood that the second training dataset and the third training dataset having sets of normalized acoustic features are updated to a second smooth splined dataset and a third smooth splined dataset, respectively. Throughout the description of the various embodiments presented herein, the first, second, and third smooth splined datasets are collectively referred to as the smooth splined training data.
[0061] Referring back to the description of FIG. 2A, at step (216), the machine-learning model (130) is generated based on the smooth splined training data. The machine-learning model (130) is generated using one or more machine-learning algorithms including, but not limited to, support vector machines, cubist and lasso regression, decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, and genetic algorithms. The machine-learning model (130) may also be generated using rule-based machine learning and learning classifier systems. In certain embodiments, hyper-parameters associated with the machine-learning model (130) may be finalized after several rounds of cross-validation and experimentation. Examples of the hyper-parameters associated with the machine-learning model (130) include a type of network to be used for regression, such as linear networks or radial basis function networks, a learning rate, a number of latent factors in a matrix factorization, a number of hidden layers in a deep neural network, and a number of clusters in a k-means clustering.
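By way of illustration of the cross-validation mentioned above, the sketch below searches over a support vector regressor's kernel and regularization strength using scikit-learn; the toolkit, the function name, and the grid values are all assumptions.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def tune_svr(X_train, y_train):
    # Candidate hyper-parameters; the values are illustrative only.
    param_grid = {
        "kernel": ["linear", "rbf"],   # linear vs. radial basis function
        "C": [0.1, 1.0, 10.0],
    }
    search = GridSearchCV(SVR(), param_grid, cv=5)  # 5-fold cross-validation
    search.fit(X_train, y_train)
    return search.best_estimator_
```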
[0062] In one embodiment, the machine-learning model (130) is generated using a specific type of machine-learning algorithm. For example, the machine-learning model (130) is generated using a support vector machine. In this scenario, the smooth splined training dataset, including the first smooth splined dataset, the second smooth splined dataset, and the third smooth splined dataset, is provided as input to the support vector machine. Thus, the generated machine-learning model (130) is configured to monitor the fluid level in the holding object (110) using the support vector machine in real-time.
[0063] In another embodiment, the machine-learning model (130) is generated using more than one type of machine-learning algorithm. For example, the machine-learning model (130) may be generated using a support vector machine and a cubist and lasso regression. In this example, the first and second smooth splined datasets may be provided as inputs to the support vector machine, and the third smooth splined dataset may be provided as input to the cubist and lasso regression. Thus, the generated machine-learning model (130) uses the support vector machine for monitoring the fluid level in the first range (e.g., from zero to 40 percent of the total volume of the holding object 110) and the second range (e.g., from 40 percent to 80 percent of the total volume of the holding object 110) in real-time. Further, the generated machine-learning model (130) uses the cubist and lasso regression for monitoring the fluid level in the third range (i.e., from 80 percent to the total volume) in real-time.
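A sketch of this mixed-algorithm arrangement follows. Because Cubist has no standard scikit-learn implementation, lasso regression alone stands in for the "cubist and lasso regression" of this example; that substitution, like the toolkit itself, is an assumption.

```python
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

def fit_range_models(first_ds, second_ds, third_ds):
    # Each dataset is a (feature_matrix, fluid_levels) pair (hypothetical).
    svm_low = SVR().fit(*first_ds)        # 0-40 percent range
    svm_mid = SVR().fit(*second_ds)       # 40-80 percent range
    lasso_high = Lasso().fit(*third_ds)   # 80-100 percent range
    return {"low": svm_low, "mid": svm_mid, "high": lasso_high}
```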
[0064] Subsequent to generation of the machine-learning model (130) based on the smooth splined training data and one or more desired machine-learning algorithms, the performance of the generated machine-learning model (130) in identifying the fluid levels in the holding objects is validated based on the validation data, as described in detail with reference to FIG. 2B. From the validation data, which is obtained by segmenting the acoustic frames and the acoustic features corresponding to the acoustic frames, a first validation dataset including sets of acoustic features is selected. The first validation dataset having the sets of acoustic features is extracted from the acoustic signals generated when the volume of the fluid being filled in the sample holding objects is in the first range (e.g., from zero to 40 percent of the total volume of the sample holding objects).
[0065] Similarly, from the validation data, a second validation dataset and a third validation dataset are selected. The second validation dataset having sets of acoustic features is extracted from the acoustic signals generated when the volume of the fluid being filled in the sample holding objects is between approximately 40 percent and 80 percent of the total volume of the holding objects. The third validation dataset having sets of acoustic features is extracted from the acoustic signals generated when the volume of the fluid being filled in the sample holding objects is between 80 percent and 100 percent of the total volume of the corresponding holding objects. Thus, the selected first, second, and third validation datasets are used to validate the generated machine-learning model (130) as described in the subsequent paragraphs.
[0066] At step (218), a set of acoustic features selected from each of the first validation dataset, the second validation dataset, and the third validation dataset is provided as an input to the generated machine-learning model (130). At step (220), the machine-learning model (130) identifies a corresponding fluid level in the holding object (110) for each of the given inputs based on the training provided to the machine-learning model (130). At step (222), a difference between the identified fluid level and an actual fluid level for each of the given inputs is determined. At step (224), the difference between the identified fluid level and the actual fluid level for each of the given inputs is checked to identify whether the difference falls within a corresponding defined threshold. If the difference identified for each of the given inputs falls within the corresponding defined threshold, the performance of the generated machine-learning model (130) in identifying the fluid levels in the holding objects is considered acceptable.
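Steps (218) through (224) may be sketched as follows, assuming a scikit-learn-style `predict` interface; the per-range thresholds of paragraphs [0067] and [0068] would be passed in as `threshold_percent`.

```python
import numpy as np

def validate(model, X_val, actual_levels, threshold_percent):
    # Steps (218)-(224): identify a fluid level for each validation input
    # and require every error to fall within the range's defined threshold.
    predicted = model.predict(np.asarray(X_val))
    errors = np.abs(predicted - np.asarray(actual_levels, dtype=float))
    return bool(np.all(errors <= threshold_percent))
```

For instance, `validate(model, X_first, y_first, 1.0)` would apply the exemplary 1 percent threshold of the first range described in the next paragraph.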
[0067] In one exemplary implementation, a set of acoustic features selected from the first validation dataset may be provided as an input to the machine-learning model (130). Suppose the actual fluid level in the holding object (110) is 30 percent of the total volume and the defined acceptable error threshold for monitoring the fluid level in the first range (i.e., 0-40 percent of the total volume) is 1 percent. In this example, for the given set of acoustic features, if the machine-learning model (130) identifies that the fluid level in the holding object (110) is 10 percent of the total volume, then the performance of the generated machine-learning model (130) for monitoring the fluid level in the first range may be considered unacceptable. Similarly, one or more sets of acoustic features selected from the second and third validation datasets are provided as inputs to the machine-learning model (130), and the performance of the machine-learning model (130) in identifying the fluid levels in the second range (i.e., 40-80 percent of the total volume) and the third range (i.e., 80-100 percent of the total volume) is evaluated.
[0068] In certain embodiments, the acceptable error threshold associated with the second range may be selected to be smaller than the acceptable error threshold associated with the first range. An exemplary acceptable error threshold associated with the second range is 0.5 percent. Further, the acceptable error threshold may be selected to be smaller for the third range than for the first and second ranges, because a large error in the identified fluid level in the third range may cause the fluid to overflow from the holding object (110). For example, if the holding object (110) is already filled to 98 percent of its volume and the machine-learning model (130) identifies that the holding object (110) is filled only up to 90 percent of its volume, the error may lead to overflow of the fluid from the holding object (110). Therefore, an exemplary acceptable error threshold associated with the third range may be 0.2 percent.
[0069] Subsequent to successful validation of the performance of the machine-learning model (130) in identifying the fluid levels across all ranges, at step (226), the validated machine-learning model (130) is saved in the embedded device (112) of FIG. 1 for monitoring the fluid level in the holding object (110) in real-time. In certain embodiments, when the performance of the machine-learning model (130) in identifying the fluid levels across all ranges is not acceptable, a new machine-learning model (130) is generated and validated before being saved in the embedded device (112).
[0070] In one embodiment, the new machine-learning model (130) is generated by changing one or more hyper-parameters associated with the failed machine-learning model (130). For example, changing a hyper-parameter includes changing the kernel associated with the failed machine-learning model (130) from linear networks to radial basis function networks. In another embodiment, the new machine-learning model (130) is generated by changing the machine-learning algorithm used in the failed machine-learning model (130). For example, when the failed machine-learning model (130) uses the support vector machine for monitoring the fluid level, the new machine-learning model (130) may use neural networks.
[0071] In certain scenarios, the performance of the machine-learning model (130) may be determined to be acceptable in identifying the fluid levels in certain ranges but not acceptable in identifying the fluid levels in other ranges. For example, the generated machine-learning model (130) may appropriately identify the fluid level in the first and second ranges within the defined acceptable error thresholds but may not appropriately identify the fluid level in the third range. In such scenarios, the smoothing spline method is applied to an acoustic feature that is different from the previously selected acoustic feature. For example, if the smoothing spline method was previously applied over the spectral entropy, the smoothing spline method may be applied over the spectral energy. Accordingly, the training data having the normalized values associated with the spectral energy is updated by fitting the normalized values of the spectral energy to a smooth curve using the spline function, as noted previously, to obtain updated training data. With the updated training data, the machine-learning model (130) is trained again to monitor the fluid level across all ranges and is validated based on the validation data. Subsequently, the validated machine-learning model (130) is stored in the embedded device (112).
[0072] In another example, the generated machine-learning model (130) uses the support vector machine for monitoring the fluid level in the first range and uses the cubist and lasso regression for monitoring the fluid level in the second and third ranges. The support vector machine may appropriately identify the fluid level in the first range, but the cubist and lasso regression may not appropriately identify the fluid level in the second and third ranges. In this example, the generated machine-learning model (130) is updated by using a new machine-learning algorithm (e.g., neural networks) instead of the cubist and lasso regression for monitoring the fluid level in the second and third ranges. The updated machine-learning model (130) is validated, and the validated machine-learning model (130) is stored in the embedded device (112) for monitoring the fluid level, across all ranges and in real-time, in the different types of holding objects used during the training process, as described in detail with reference to FIG. 4.
[0073] FIG. 4 is a flow diagram (400) illustrating an exemplary method for monitoring the fluid level in the holding object (110) in real-time using the fluid level monitoring system (100) of FIG. 1. As previously noted with reference to the description of FIG. 1, the fluid passes out of the acoustic chamber (106) and fills the holding object (110). When the fluid is being filled in the holding object (110), the acoustic signals are generated. The acoustic signal-capturing device (108) records the generated acoustic signals and provides them as one or more inputs to the acoustic signal processing system (126). Thus, at step (402), the acoustic signal processing system (126) receives from the acoustic signal-capturing device (108), continuously or at regular intervals, the acoustic signals that are generated when the fluid is being filled in the holding object (110).
[0074] Further, at step (404), the acoustic signal processing system (126) fragments the received acoustic signals into a plurality of acoustic frames, and each of the acoustic frames is of a designated frame size, for example, 1024 audio samples. At step (406), the acoustic signal processing system (126) extracts one or more acoustic features from each of the acoustic frames, as previously described with reference to the description of FIG. 2A. Therefore, each of the acoustic frames has a corresponding set of acoustic features. At step (408), the acoustic signal processing system (126) normalizes the values of the extracted acoustic features in each set such that the values fall within a selected range, for example, between 0 and 1. The collective sets of acoustic features having the normalized values are referred to herein as normalized acoustic data.
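A minimal sketch of the fragmentation at step (404) follows, splitting the received signal into non-overlapping 1024-sample frames; dropping a trailing partial frame is one possible convention, assumed here and not mandated by the text.

```python
import numpy as np

def fragment_into_frames(signal, frame_size=1024):
    signal = np.asarray(signal, dtype=float)
    n_frames = len(signal) // frame_size           # whole frames only
    # A trailing partial frame, if any, is dropped in this sketch.
    return signal[: n_frames * frame_size].reshape(n_frames, frame_size)
```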
[0075] At step (410), the acoustic signal processing system (126) fits the normalized values associated with a specific type of acoustic feature in each set to a smooth curve using a spline function to obtain smooth splined acoustic data. In one embodiment, the specific type of acoustic feature is the same as the acoustic feature upon which the smoothing spline method is applied during generation of the machine-learning model (130). For example, if the smoothing spline method is applied over the spectral entropy during generation of the machine-learning model (130), the smoothing spline method is applied over the same acoustic feature (i.e., the spectral entropy) in real-time as well for monitoring the fluid level in the holding object (110).
[0076] Application of the smoothing spline method over a selected acoustic feature by fitting the normalized values to the smooth curve updates the normalized values of the selected acoustic feature in each set. Thus, the sets of acoustic features are further updated after application of the smoothing spline method over the selected acoustic feature, and such updated sets of acoustic features are referred to herein as smooth splined acoustic data. At step (412), the acoustic signal processing system (126) provides the smooth splined acoustic data as an input to the generated machine-learning model (130).
[0077] At step (414), the generated machine-learning model (130) continuously monitors the fluid level when the fluid is being filled in the holding object (110) based on the smooth splined acoustic data and the smooth splined training data that is used to generate the machine-learning model (130). More specifically, the generated machine-learning model (130) identifies, for each set of acoustic features in the smooth splined acoustic data, a particular fluid level in the holding object (110) by identifying a fluid level associated with a same or a similar set of acoustic features in the smooth splined training data.
[0078] For example, a first set of acoustic features selected from the smooth splined acoustic data includes MFCC 1 with an associated value of 0.1, spectral entropy with an associated value of 0.2 bits/nats, spectral energy with an associated value of 0.3 joules, spectral flatness with an associated value of 0.8, and spectral flux with an associated value of 1.0 Hz⁻¹. If the fluid level associated with the same set of acoustic features in the smooth splined training data is 1 percent of the total volume of the holding object (110), then, in real-time, the machine-learning model (130) identifies that the fluid level in the holding object (110) is 1 percent of the total volume based on the first set of acoustic features and the smooth splined training data. Similarly, it is to be understood that, as and when the holding object (110) is being filled, the machine-learning model (130) continuously monitors the fluid level in the holding object (110) by identifying a particular fluid level associated with each subsequent set of acoustic features in the smooth splined acoustic data based on the smooth splined training data.
[0079] Once the fluid reaches a desired level in the holding object (110) identified by the machine-learning model (130), at step (416), the flow control device (114) stops the fluid inflow into the holding object (110). In one embodiment, the flow control device (114) stops the fluid inflow by closing a solenoid valve, which is kept at a desired location in the fluid flow path from the fluid source (102) to the holding object (110).
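Steps (402) through (416) may be drawn together in the following high-level sketch, in which `capture`, `extract_features`, and `valve` are hypothetical stand-ins for the acoustic signal-capturing device (108), the feature pipeline of FIG. 2A, and the flow control device (114); the desired level shown is illustrative only.

```python
def monitor_and_control(capture, extract_features, model, valve,
                        desired_level_percent=95.0):
    # capture: iterable yielding 1024-sample acoustic frames (hypothetical).
    # extract_features: returns a normalized, smooth splined feature set.
    # desired_level_percent is illustrative, not from the specification.
    for frame in capture:
        features = extract_features(frame)
        level = model.predict([features])[0]   # identified fluid level (%)
        if level >= desired_level_percent:
            valve.close()                      # step (416): stop the inflow
            break
```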
[0080] Throughout the description of the various embodiments presented herein, application of the smoothing spline method over a particular acoustic feature and tuning of one or more hyper-parameters associated with the machine-learning model (130) improve the performance of the machine-learning model (130) in identifying the fluid levels in the holding objects. FIGS. 5A and 5B illustrate the difference in performance of the machine-learning model (130) in identifying the fluid levels with and without application of the smoothing spline method over a particular acoustic feature.
[0081] In particular, FIG. 5A is a graphical representation (500) illustrating the performance of the machine-learning model (130) in identifying the fluid levels when the machine-learning model (130) is generated without applying the smoothing spline method over a particular acoustic feature. Specifically, the graphical representation (500) depicts a comparison between the fluid levels (502) identified by the machine-learning model (130) for given inputs of validation datasets and the actual fluid levels (504). As evident from the depictions of FIG. 5A, there are significant deviations between the identified fluid levels (502) and the actual fluid levels (504) when the machine-learning model (130) is generated without applying the smoothing spline method over an acoustic feature.
[0082] FIG. 5B is an exemplary graphical representation (506) illustrating the performance of the machine-learning model (130) in identifying the fluid levels when the machine-learning model (130) is generated by applying the smoothing spline method over a particular acoustic feature. As evident from the depictions of FIG. 5B, the deviations between the identified fluid levels (508) and the actual fluid levels (510) are minimal when compared to FIG. 5A.
[0083] FIG. 5C is another exemplary graphical representation (512) illustrating the performance of the machine-learning model (130) in identifying the fluid levels when the machine-learning model (130) is generated by applying the smoothing spline method over a particular acoustic feature and also by tuning one or more hyper-parameters of the machine-learning model (130). In this scenario, the deviations between the identified fluid levels (514) and the actual fluid levels (516) are smaller still when compared to FIG. 5A and FIG. 5B. Hence, the performance of the machine-learning model (130) in identifying the fluid levels improves when the smoothing spline method is applied over a particular acoustic feature and one or more hyper-parameters of the machine-learning model (130) are tuned.
[0084] The fluid level monitoring system (100) described herein uses the machine-learning system (128) having the machine-learning model (130) for monitoring the fluid level in different types of holding objects. Unlike existing systems that use expensive sensors, such as ultrasonic sensors, optical sensors, and wireless sensors, that may require frequent calibration, the fluid level monitoring system (100) uses the machine-learning model (130), which makes the fluid level monitoring system (100) comparatively less expensive. Further, the machine-learning model (130) uses a plurality of acoustic features for monitoring the fluid level in the holding object (110), which improves the accuracy and performance of the machine-learning model (130) in identifying the fluid levels.
[0085] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[0086] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the claimed invention.