ABSTRACT
METHOD AND SYSTEM FOR OPTIMIZING MACHINE LEARNING MODEL FOR PREDICTING ENGINE MALFUNCTION
Disclosed herein is a method for optimizing a machine learning model for predicting engine malfunction. The method encompasses: receiving, at predefined intervals, a plurality of data files from one or more sensors placed in and around the engine, wherein the plurality of data files corresponds to engine parameters and vehicle parameters indicative of the health of the engine; quantizing each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters; converting the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files; storing the converted data files for future reference to predict engine malfunction; and fetching the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit is idle from performing other tasks.
FIG. 1
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See section 10, rule 13)
“METHOD AND SYSTEM FOR OPTIMIZING MACHINE LEARNING MODEL FOR PREDICTING ENGINE MALFUNCTION”
TATA MOTORS LIMITED of Bombay House, 24 Homi Mody Street, Hutatma Chowk,
Mumbai 400 001, Maharashtra, India
Nationality: Indian
The following specification particularly describes the invention and the manner in which it is
to be performed.
METHOD AND SYSTEM FOR OPTIMIZING MACHINE LEARNING MODEL FOR PREDICTING ENGINE MALFUNCTION
FIELD OF THE DISCLOSURE
[0001] The present disclosure relates generally to the field of automobiles. More particularly, the present disclosure relates to methods and systems for optimizing a machine learning model for predicting engine malfunction.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Machine learning (ML) models provide very efficient and intelligent techniques for various tasks across different domains, such as predictive analysis, disease detection and diagnosis, image segmentation, workflow optimization and others, based on the input parameters fed to the ML model and its training on the same parameters. Similarly, machine learning models may be used in vehicles to predict vehicle breakdown and engine malfunction to assist drivers. These predictions may be highly useful in a scenario of driving the vehicle over long distances on highways, where roadside assistance may not be available and the drivers are required to maintain the vehicle on their own for such distances.
[0004] However, running a machine learning model generally requires electronic components with high processing power, as the memory usage of an ML model is very high and corresponds to the size of the ML model. Therefore, one of the major issues faced by developers is to run the ML model within the lesser available memory of the electronic components used in a variety of systems. Further, this capability of running complex ML models with lesser memory unlocks a more cost-effective usage of ML models in day-to-day life.
[0005] Another issue faced while using an ML model for predicting vehicle breakdown and engine malfunction is the need to provide an additional dedicated electronic component/microcontroller for the ML model, which induces extra cost for the manufacturer. Therefore, there is a need to provide a solution to use an optimized ML model for predicting vehicle breakdown and engine malfunction in existing available set-ups to avoid any additional costs.
[0006] Thus, there exists an imperative need in the art to optimize a machine learning model for predicting engine malfunction, which the present disclosure aims to address.
OBJECT OF THE INVENTION
[0007] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0008] It is an object of the present disclosure to provide a system and a method for optimizing a machine learning model for predicting engine malfunction.
[0009] It is another object of the present disclosure to provide a solution that optimizes a machine learning model so that it runs on a fraction of the memory otherwise required.
[0010] It is yet another object of the present disclosure to provide a solution to predict vehicle breakdown and engine malfunction.
SUMMARY
[0011] The present disclosure overcomes one or more shortcomings of the prior art and provides additional advantages discussed throughout the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other aspects and advantages of the disclosure are described in detail herein.
[0012] In an aspect of the present disclosure, a method for optimizing a machine learning model for predicting engine malfunction is disclosed. The method includes receiving, at predefined intervals, a plurality of data files from one or more sensors placed in and around the engine. The received plurality of data files corresponds to engine parameters and vehicle parameters indicative of the health of the engine. Further, the method comprises quantizing each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters. Further, the method comprises converting the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files. The method further comprises storing the converted data files for future reference to predict engine malfunction. Further, the method comprises fetching the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit is idle from performing other tasks.
[0013] In another aspect of the present disclosure, the first data format may comprise a float data type having a size greater than or equal to 32 bits.
[0014] In yet another aspect, the second data format may comprise an integer data type having a size of 8 bits.
[0015] In another aspect, the state of the detection unit may be one of an idle state and a busy state.
[0016] In another aspect, for predicting the engine malfunction, the method further comprises determining an engine condition as one of a normal condition, a check condition, a danger condition, and an error condition.
[0017] In yet another aspect, the check condition indicates the engine having a low possibility of malfunctioning, the danger condition indicates the engine having a high possibility of malfunctioning, and the error condition indicates a failure to determine the engine condition.
[0018] In another aspect, a system for optimizing a machine learning model for predicting engine malfunction is disclosed. The system comprises a memory and one or more processors connected to the memory, wherein the one or more processors are configured to receive, at predefined intervals, a plurality of data files from one or more sensors placed in and around the engine. The received plurality of data files corresponds to engine parameters and vehicle parameters indicative of the health of the engine. The one or more processors are further configured to quantize each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters. The one or more processors are further configured to convert the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files. The one or more processors are further configured to store the quantized and converted data files for future reference to predict engine malfunction. The one or more processors are further configured to fetch the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit is idle from performing other tasks.
[0019] In another aspect, the first data format may comprise a float data type having a size greater than or equal to 32 bits.
[0020] In yet another aspect, the second data format may comprise an integer data type having a size of 8 bits.
[0021] In another aspect, the state of the detection unit may be one of an idle state and a busy state.
[0022] In another aspect, for predicting the engine malfunction, the one or more processors are further configured to determine an engine condition as one of a normal condition, a check condition, a danger condition, and an error condition.
[0023] In yet another aspect, the check condition indicates the engine having a low possibility of malfunctioning, the danger condition indicates the engine having a high possibility of malfunctioning, and the error condition indicates a failure to determine the engine condition.
[0024] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects and features described above, further aspects and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF DRAWINGS
[0025] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary aspects and, together with the description, explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some aspects of the system and/or methods in accordance with aspects of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0026] FIG. 1 illustrates an example representation of an environment depicting optimization of a machine learning model for predicting engine malfunction, in accordance with exemplary embodiments of the present disclosure.
[0027] FIG. 2A illustrates an exemplary block diagram of a system for optimizing a machine learning model for predicting engine malfunction, in accordance with exemplary embodiments of the present disclosure.
[0028] FIG. 2B illustrates an exemplary block diagram of one or more processors, in accordance with exemplary embodiments of the present disclosure.
[0029] FIG. 3 illustrates an exemplary method flow diagram indicating the process for optimizing a machine learning model for predicting engine malfunction, in accordance with exemplary embodiments of the present disclosure.
[0030] FIG. 4 refers to an exemplary scenario depicting reduction in memory usage by the optimized machine learning model for predicting engine malfunctions, in accordance with exemplary embodiments of the present disclosure.
[0031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
[0032] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0033] In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Further, the ensuing description provides “exemplary embodiments” only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment.
[0034] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a
sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0035] While the disclosure is susceptible to various modifications and alternative forms, specific aspects thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the specific forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0036] The terms “comprises”, “comprising”, “includes”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0037] Further, the system may also comprise a “processor” or “processing unit” or “one or more processors”, wherein a processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0038] In the following detailed description of the aspects of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific aspects in which the disclosure may be practiced. These aspects are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other aspects may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0039] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0040] FIG. 1 illustrates an example representation of an environment [100] depicting optimization of a machine learning model for predicting engine malfunction, in accordance with some aspects of the present disclosure.
[0041] In an aspect of the present disclosure, the environment [100] includes one or more sensors [202], wherein N denotes the total number of sensors in the vehicle. The one or more sensors [202] are connected with the optimized machine learning model, which predicts future engine malfunction and vehicle breakdown. The optimized machine learning model receives a plurality of data files comprising a plurality of parameters from the one or more sensors [202], wherein each data file is indicative of the health of the vehicle. Based on the received plurality of data files, the optimized machine learning model predicts future engine malfunction and vehicle breakdown and showcases different conditions of the vehicle to a driver. The conditions showcased may be classified as (i) normal condition, when the vehicle is predicted to not face any problem, (ii) check condition, when the optimized ML model detects abnormality in some parts and predicts a low possibility of damage/accident to the vehicle, (iii) danger condition, when the optimized ML model predicts that vehicle breakdown may happen in some parts and there is a high possibility of accident, and (iv) error condition, when the optimized ML model is unable to predict the engine malfunction and vehicle breakdown due to any error.
[0042] Referring to Figure 2A, an exemplary block diagram of a system [200] for optimizing a machine learning model for predicting engine malfunction is shown, in accordance with the exemplary embodiments of the present invention. The system [200] comprises one or more sensors [202], at least one memory unit [204], one or more processors [206], at least one detection unit [208] and at least one display unit [210]. Also, all of the components/units of the system [200] are assumed to be connected to each other unless otherwise indicated below. Also, only a few units are shown in FIG. 2A; however, the system [200] may comprise multiple such units, or the system [200] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [200] may be present in a vehicle to implement the features of the present invention. The system [200] may be a part of the vehicle, or may be independent of, but in communication with, the vehicle or a vehicle communication unit (not shown).
[0043] In order to optimize the machine learning model for predicting engine malfunction, the one or more processors [206] of the system [200] are configured to receive, at predefined intervals, a plurality of data files from the one or more sensors [202] placed in and around the engine, wherein the plurality of data files corresponds to engine parameters and vehicle parameters indicative of the health of the engine.
[0044] In an embodiment of the present disclosure, the one or more sensors [202] of the system [200] may include, but are not limited to, a battery voltage sensor, a gear ratio sensor, a sensor to measure distance covered with the Malfunction Indicator Lamp (MIL) on, a first sensor (pedal track 1), a second sensor (pedal track 2), a coolant temperature measuring sensor, a fuel temperature measuring sensor, an atmospheric pressure sensor, an air meter sensor, an air temperature sensor, and an inlet air flow sensor. The one or more sensors [202] provide the plurality of data files at predefined intervals via a Controller Area Network (CAN) bus (not shown) that allows exchanging data in a reliable and efficient way, wherein each data file comprises multiple data entries as a function of time for each sensor. The one or more processors [206] are further configured to curate the received data, which includes creating, organizing and maintaining data sets in the data files after removal of the Null and corrupt data.
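By way of illustration only, the following Python sketch shows one possible reading of the data-curation step described above, namely dropping Null and corrupt (non-numeric) entries from an interval of raw readings; the function name, data layout and example values are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: hypothetical curation of raw sensor readings
# received over predefined intervals. Field names and structure are assumed.

def curate(readings):
    """Drop None/NaN and non-numeric (corrupt) entries from a list of readings."""
    curated = []
    for value in readings:
        if value is None:
            continue                      # remove Null data
        try:
            number = float(value)
        except (TypeError, ValueError):
            continue                      # remove corrupt (non-numeric) data
        if number != number:              # NaN check
            continue
        curated.append(number)
    return curated

# Example: one interval of coolant-temperature readings from the CAN bus.
raw = [200.0, "202", None, "ERR", 210.0, 205.0, float("nan"), 208.0]
print(curate(raw))   # -> [200.0, 202.0, 210.0, 205.0, 208.0]
```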
[0045] In an exemplary scenario, the data files received from the coolant temperature measuring sensor may be curated into data sets such as 200 Fahrenheit, 202 Fahrenheit, 210 Fahrenheit, 205 Fahrenheit and 208 Fahrenheit in the data files, where each value reflects the measured temperature. Therefore, the plurality of data files from each corresponding sensor is reflective of the health and maintenance conditions of the respective engine parameters and vehicle parameters.
[0046] The one or more processors [206] of the system [200] may then be configured to quantize each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters. The volume of data files received from the plurality of sensors is so large that it increases the size of the machine learning model, which therefore may only be run on an electronic component/microcontroller capable of higher processing ability and large processing memory. Therefore, it is necessary to quantize the received data, wherein the maximum and minimum average values over predefined intervals are retained, and the other entries are discarded. This process shrinks the size of the data files without compromising on the accuracy and performance of the machine learning model.
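The quantization described above can be read as retaining only the maximum and the minimum of the per-interval average values and discarding the remaining entries. A minimal Python sketch of that reading follows; the interval length, data layout and helper names are assumptions for illustration, not details from the disclosure.

```python
# Illustrative sketch: keep only the maximum and minimum average value per
# sensor over predefined intervals; all other entries are discarded.
# Interval size and data layout are assumed for the example.

def interval_averages(values, interval_len):
    """Average the readings inside each consecutive interval."""
    return [
        sum(values[i:i + interval_len]) / len(values[i:i + interval_len])
        for i in range(0, len(values), interval_len)
    ]

def quantize_data_file(values, interval_len=5):
    """Reduce a data file to its maximum and minimum interval averages."""
    averages = interval_averages(values, interval_len)
    return {"max_avg": max(averages), "min_avg": min(averages)}

coolant_temps = [200, 202, 210, 205, 208, 207, 206, 204, 203, 209]
print(quantize_data_file(coolant_temps))  # -> {'max_avg': 205.8, 'min_avg': 205.0}
```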
[0047] The one or more processors [206] of the system [200] may then be configured to convert the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files. The quantized data files are further compressed in the conversion process by replacing the data format of the data files from a higher-bit data type to a lower-bit data type. In an implementation of the present disclosure, the first data format comprises a float data type having a size greater than or equal to 32 bits and the second data format comprises an integer data type having a size of 8 bits. The data formats explained herein are just for the sake of exemplary understanding. However, a person skilled in the art may appreciate that the first and the second data formats may include other data types known in the art.
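The float-to-integer conversion described above resembles standard affine (scale and zero-point) quantization from 32-bit floats to 8-bit integers, which reduces storage from four bytes to one byte per value. The sketch below shows that generic technique; it is not asserted to be the exact conversion scheme of the disclosure.

```python
import numpy as np

# Illustrative affine (scale/zero-point) quantization of float32 values to
# int8, using the min/max values retained by the quantization step. This is a
# generic technique, not necessarily the exact scheme of the disclosure.

def quantize_to_int8(values_f32, min_val, max_val):
    span = float(max_val) - float(min_val)
    scale = span / 255.0 if span > 0 else 1.0
    zero_point = int(round(-128.0 - float(min_val) / scale))
    q = np.round(values_f32 / scale + zero_point)
    return np.clip(q, -128, 127).astype(np.int8), scale, zero_point

def dequantize(values_i8, scale, zero_point):
    return (values_i8.astype(np.float32) - zero_point) * scale

values = np.array([200.0, 202.0, 210.0, 205.0, 208.0], dtype=np.float32)
q, scale, zp = quantize_to_int8(values, values.min(), values.max())
print(q, q.dtype)               # int8 representation: 1 byte per entry vs 4 bytes
print(dequantize(q, scale, zp)) # approximate reconstruction of the original floats
```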
[0048] The quantization of the data files and then the conversion of the data type of the data files from a higher bit to a lower bit compresses the machine learning model to such a small size that an electronic component/microcontroller with lower processing ability and lower available memory may execute the ML model efficiently and accurately. The one or more processors [206] of the system [200] may then be configured to store, at the memory unit [204], the quantized and converted data files for future reference to predict engine malfunction.
[0049] The one or more processors [206] of the system [200] may then be configured to fetch the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit [208] is idle from performing other tasks. In an implementation of the present disclosure, the one or more processors [206] are further configured to detect a state of the detection unit [208] as one of an idle state and a busy state. In an event the detection unit [208] is in the busy state, the ML code may not be executed, resulting in no vehicle breakdown and engine malfunction prediction, whereas in the idle state, the ML model is executed to predict the engine malfunction and vehicle breakdown. This process helps in eliminating the need to provide multiple electronic components/microcontrollers in the vehicle or to provide an additional dedicated electronic component/microcontroller for predicting the engine malfunction and vehicle breakdown. Instead, the already existing electronic component/microcontroller may be used, with its already existing processing ability and available memory set-up, to run the optimized machine learning model for predictions.
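One simple way to realise the idle/busy gating described above is to poll the state of the detection unit and run inference only when it reports idle. The sketch below is a hypothetical illustration; the state source, the polling loop and the model interface are assumptions rather than elements of the disclosure.

```python
import random
import time

# Hypothetical sketch of gating ML inference on the detection unit's state.
# The state source, polling period and model interface are assumptions.

def detection_unit_state():
    """Stand-in for reading the detection unit's state; returns 'idle' or 'busy'."""
    return random.choice(["idle", "busy"])

def run_prediction(stored_data_files):
    """Placeholder for the optimized ML model's inference over stored data."""
    return "normal"   # one of: normal / check / danger / error

def maybe_predict(stored_data_files):
    if detection_unit_state() == "busy":
        return None                               # busy: skip inference this cycle
    return run_prediction(stored_data_files)      # idle: fetch data and predict

for _ in range(3):
    print(maybe_predict(stored_data_files=[]))
    time.sleep(0.1)
```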
[0050] In an implementation of the present disclosure, to predict the engine malfunction, the one or more processors [206] are further configured to determine an engine condition as one of (i) normal condition, (ii) check condition, (iii) danger condition and (iv) error condition. The normal condition is an indicator that the engine may continue to function normally and is not expected to have any malfunction. The check condition is an indicator that there may be abnormality in some parts but with low possibility of damage or accident or a complete engine malfunction. The danger condition is an indicator and prediction that there is a high possibility of vehicle
breakdown or engine malfunction, and therefore the vehicle must be taken for servicing. The error condition is an indicator that there exists a failure to determine the engine and vehicle condition. Further, there may also be a normal condition indicating that the vehicle may continue to be free from any breakdown and engine malfunction. In an embodiment of the present disclosure, all the vehicle conditions may be displayed to a driver via a display unit [210] fitted in the vehicle.
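Purely as an illustration, the mapping from a model output to the four conditions described above might look like the following sketch; the probability-style score and the thresholds are assumptions and are not values given in the disclosure.

```python
# Hypothetical mapping from a model's malfunction score to the four engine
# conditions described above. Thresholds are illustrative assumptions only.

NORMAL, CHECK, DANGER, ERROR = "normal", "check", "danger", "error"

def engine_condition(malfunction_score):
    """Classify a 0..1 malfunction probability into one of four conditions."""
    if malfunction_score is None:
        return ERROR                 # model failed to produce a prediction
    if malfunction_score < 0.3:
        return NORMAL                # no problem expected
    if malfunction_score < 0.7:
        return CHECK                 # abnormality with low possibility of damage
    return DANGER                    # high possibility of breakdown; service needed

for score in (0.1, 0.5, 0.9, None):
    print(score, "->", engine_condition(score))
```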
[0051] Referring to Figure 2B, an exemplary block diagram of one or more processors [206] is shown, in accordance with the exemplary embodiments of the present disclosure.
[0052] In an aspect of the present disclosure, the one or more processors [206] comprise at least a quantizing unit [206-2], at least a conversion unit [206-4] and at least a ML model [206-6]. The quantizing unit [206-2] is configured to quantize each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters. The volume of data files received from the plurality of sensors is so large that it increases the size of the machine learning model, which therefore may only be run on an electronic component/microcontroller capable of higher processing ability and large processing memory. Therefore, it is necessary to quantize the received data, wherein the maximum and minimum average values over predefined intervals are retained, and the other entries are discarded. This step shrinks the size of the data files without compromising on the accuracy and performance of the machine learning model.
[0053] Further, the conversion unit [206-4] is configured to convert the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files. The quantized data files are further compressed in the conversion process by replacing the data format of the data files from a higher-bit data type to a lower-bit data type. In an implementation of the present disclosure, the first data format comprises a float data type having a size greater than or equal to 32 bits and the second data format comprises an integer data type having a size of 8 bits.
[0054] The quantization of the data files and then the conversion of the data type of the data files from a higher bit to a lower bit leads to optimization of the machine learning model to such a small size that an electronic component/microcontroller with lower processing ability and lower available memory may execute the ML model efficiently and accurately. The one or more processors [206] of the system [200] may then be configured to store, at the memory unit [204], the quantized and converted data files for future reference to predict engine malfunction.
[0055] Further, the ML model [206-6] is trained using the plurality of parameters in the plurality of data files received from the one or more sensors [202], using the Tanh and sigmoid activation functions, wherein the sigmoid forms part of the exponential factor applied to the data received from the engine. Further, the ML model [206-6] is configured to predict engine malfunction, in an event the memory of the detection unit [208] is idle, and showcases various engine conditions.
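The disclosure names Tanh and sigmoid as the activation functions. A minimal NumPy sketch of a small network with a tanh hidden layer and a sigmoid output is shown below for orientation only; the layer sizes, weights, and the use of eleven input features (one per listed sensor) are illustrative assumptions, and no training procedure is implied.

```python
import numpy as np

# Minimal forward pass of a small network using tanh hidden activations and a
# sigmoid output, the two activation functions named in the disclosure. Layer
# sizes, weights and the input features are illustrative assumptions only.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_features, n_hidden = 11, 8          # e.g. one feature per listed sensor (assumed)
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

def predict_malfunction(features):
    """Return a 0..1 malfunction score for one vector of engine/vehicle parameters."""
    hidden = np.tanh(features @ W1 + b1)       # tanh hidden layer
    logit = (hidden @ W2 + b2).item()          # single output unit
    return sigmoid(logit)                      # sigmoid output score

features = rng.normal(size=n_features)         # stand-in for quantized sensor inputs
print(predict_malfunction(features))
```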
[0056] In one exemplary embodiment, the ML model [206-6] may identify the engine condition as one of: (i) normal condition, (ii) check condition, (iii) danger condition and (iv) error condition. The normal condition is an indicator that the engine may continue to function normally and is not expected to have any malfunction. The check condition is an indicator that there may be abnormality in some parts but with a low possibility of damage or accident or a complete engine malfunction. The danger condition is an indicator and prediction that there is a high possibility of vehicle breakdown or engine malfunction, and therefore the vehicle must be taken for servicing. The error condition is an indicator that there exists a failure to determine the engine and vehicle condition. Further, there may also be a normal condition indicating that the vehicle may continue to be free from any breakdown and engine malfunction. In an embodiment of the present disclosure, all the vehicle conditions may be displayed to a driver via a display unit [210] fitted in the vehicle.
[0057] Referring to Figure 3, an exemplary method flow diagram [300] for optimizing a machine learning model for predicting engine malfunction is shown in accordance
with exemplary embodiments of the present disclosure. In an implementation, the method is performed by the system [200].
[0058] At step [304], the method [300] discloses receiving, at predefined intervals, a plurality of data files from one or more sensors [202] placed in and around the engine, wherein the plurality of data files corresponds to engine parameters and vehicle parameters indicative of health of the engine.
[0059] In an embodiment of the present disclosure, the one or more sensors [202] may include, but are not limited to, a battery voltage sensor, a gear ratio sensor, a sensor to measure distance covered with the Malfunction Indicator Lamp (MIL) on, a first sensor (pedal track 1), a second sensor (pedal track 2), a coolant temperature measuring sensor, a fuel temperature measuring sensor, an atmospheric pressure sensor, an air meter sensor, an air temperature sensor, and an inlet air flow sensor. The one or more sensors [202] provide the plurality of data files at predefined intervals via a Controller Area Network (CAN) bus (not shown) that allows exchanging data in a reliable and efficient way, wherein each data file comprises multiple data entries as a function of time for each sensor. The method [300] further comprises curating the received data, which includes creating, organizing and maintaining data sets in the data files after removal of the Null and corrupt data.
[0060] In an exemplary scenario, the data files received from the coolant temperature measuring sensor may be curated into data sets such as 200 Fahrenheit, 202 Fahrenheit, 210 Fahrenheit, 205 Fahrenheit and 208 Fahrenheit in the data files, where each value reflects the measured temperature. Therefore, the plurality of data files from each corresponding sensor is reflective of the health and maintenance conditions of the respective engine parameters and vehicle parameters.
[0061] At step [306], the method [300] discloses quantizing each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters. The volume of data files received from the plurality of sensors is so large that it increases the size of the machine
learning model, which therefore may only be run on an electronic component/microcontroller with higher processing ability and large processing memory. Therefore, it is necessary to quantize the received data, wherein the maximum and minimum average values over predefined intervals are retained, and the other entries are discarded. This process shrinks the size of the data files without compromising on the accuracy and performance of the machine learning model.
[0062] At step [308], the method [300] discloses converting the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files. The quantized data files are further compressed in the conversion process by replacing the data format of the data files from a higher-bit data type to a lower-bit data type. In an implementation of the present disclosure, the first data format comprises a float data type having a size greater than or equal to 32 bits and the second data format comprises an integer data type having a size of 8 bits. The data formats explained herein are just for the sake of exemplary understanding. However, a person skilled in the art may appreciate that the first and the second data formats may include other data types known in the art.
[0063] The quantization of the data files and then the conversion of the data type of the data files from a higher bit to a lower bit compresses the machine learning model to such a small size that an electronic component/microcontroller with lower processing ability and lower available memory may execute the ML model efficiently and accurately.
[0064] At step [310], the method [300] discloses storing at the memory unit [204], the converted data files for future reference to predict engine malfunction.
[0065] At step [312], the method [300] discloses fetching the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit [208] is idle from performing other tasks. In an implementation of the present disclosure, the method [300] further comprises detecting a state of the detection unit [208] as one of an idle state and a busy state. In an event the detection unit [208] is in the busy state, the ML code may not be executed, resulting in no
vehicle breakdown and engine malfunction prediction, whereas in the idle state, the ML model is executed to predict the engine malfunction and vehicle breakdown. This process helps in eliminating the need to provide multiple electronic components/microcontrollers in the vehicle or to provide an additional dedicated processor for predicting the engine malfunction and vehicle breakdown. Instead, the already existing electronic component/microcontroller may be used, with its already existing processing ability and available memory set-up, to run the optimized machine learning model for predictions.
[0066] In an implementation of the present disclosure, the predicting of the engine malfunction comprises determining, by the method, an engine condition as one of (i) normal condition, (ii) check condition, (iii) danger condition, and (iv) error condition. The normal condition is an indicator that the engine may continue to function normally and is not expected to have any malfunction. The check condition is an indicator that there may be abnormality in some parts but with a low possibility of damage or accident or a complete engine malfunction. The danger condition is an indicator and prediction that there is a high possibility of vehicle breakdown or engine malfunction, and therefore the vehicle must be taken for servicing. The error condition is an indicator that there exists a failure to determine the engine and vehicle condition. Further, there may also be a normal condition indicating that the vehicle may continue to be free from any breakdown and engine malfunction. In an embodiment of the present disclosure, all the vehicle conditions may be displayed to a driver via a display unit [210] fitted in the vehicle.
[0067] Referring to Figure 4, an exemplary scenario of memory usage by the optimized machine learning model for predicting engine malfunction is shown in accordance with exemplary embodiments of the present disclosure. The exemplary scenario compares the memory usage by a machine learning model before and after optimization, wherein it becomes possible to run the same model efficiently and effectively after compression with comparatively lower memory usage. Further, in this scenario of the present disclosure, the memory usage of an electronic
component/microcontroller by an ML model is 16,54,784 bits or 16 MB of RAM usage, whereas, after the same ML model is optimized according to the embodiments of the present disclosure, the memory usage of the same electronic component/microcontroller by the optimized ML model is reduced to 3,85,024 bits or 3 MB of RAM usage. Therefore, a smaller electronic device with limited processing ability and lower available memory may be able to run a complex machine learning model after it is optimized using the embodiments of the present disclosure, and provide the same results and accuracy while saving cost.
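As a rough cross-check of the kind of reduction shown in FIG. 4, converting 32-bit floating-point weights to 8-bit integers cuts weight storage by a factor of about four, which is consistent with the roughly four-fold reduction reported above. The parameter count in the sketch below is an assumed example, not a figure from the disclosure.

```python
# Back-of-the-envelope comparison of float32 vs int8 weight storage. The
# parameter count here is an assumed example, not a figure from the disclosure.

params = 400_000                      # hypothetical number of model weights
float32_bytes = params * 4            # 32-bit floats: 4 bytes per weight
int8_bytes = params * 1               # 8-bit integers: 1 byte per weight

print(f"float32 weights: {float32_bytes / 1e6:.1f} MB")    # 1.6 MB
print(f"int8 weights:    {int8_bytes / 1e6:.1f} MB")       # 0.4 MB
print(f"reduction:       {float32_bytes // int8_bytes}x")  # 4x
```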
[0068] As is evident from the above, the present disclosure provides a technically advanced solution for optimizing a machine learning model for predicting engine malfunction. The present disclosure provides:
- an optimized maintenance schedule of a vehicle, as the servicing activity may be preplanned based on the prediction by the machine learning model;
- cost reduction of machine learning algorithms, as the model may now be run by devices with small processing power;
- prevention of unexpected breakdowns in a vehicle;
- efficient and accurate identification of faults in the vehicle, by identifying the sensor which shows faulty data files and then identifying the corresponding part of the sensor in the vehicle; and
- reduced power consumption, by eliminating the necessity to connect to the cloud in real time.
[0069] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim:
1. A method for optimizing machine learning model for predicting engine malfunction,
the method comprising:
receiving, at predefined intervals, a plurality of data files from one or more sensors placed in and around the engine, wherein the plurality of data files corresponds to engine parameters and vehicle parameters indicative of health of the engine;
quantizing each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters;
converting the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files;
storing the converted data files for future reference to predict engine malfunction; and
fetching the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit is idle from performing other tasks.
2. The method as claimed in claim 1, wherein the first data format comprises float data type having a size greater than or equal to 32 bits.
3. The method as claimed in claim 1, wherein the second data format comprises integer data type having a size of 8 bits.
4. The method as claimed in claim 1, wherein the method further comprises detecting a state of the detection unit as one of an idle state and a busy state.
5. The method as claimed in claim 1, wherein, for predicting the engine malfunction, the method further comprises determining an engine condition as one of normal condition, check condition, danger condition, and error condition.
6. The method as claimed in claim 5, wherein the check condition indicates the engine having a low possibility of malfunctioning, the danger condition indicates the engine having a high possibility of malfunctioning, and the error condition indicates a failure to determine the engine condition.
7. A system for optimizing machine learning model for predicting engine malfunction, the system comprising:
a memory, and
one or more processors connected to the memory and wherein the one or more
processors are configured to:
receive, at predefined intervals, a plurality of data files from one or more sensors placed in and around the engine, wherein the plurality of data files corresponds to engine parameters and vehicle parameters indicative of health of the engine;
quantize each of the received data files by selecting a maximum average value and a minimum average value associated with the engine parameters and vehicle parameters;
convert, the engine parameters and vehicle parameters selected in the quantized data files from a first data format to a second data format to reduce the size of the quantized data files;
store, the quantized and converted data files for future reference to predict engine malfunction; and
fetch the stored data files to perform computations for predicting engine malfunction, in an event the memory of a detection unit is idle from performing other tasks.
8. The system as claimed in claim 7, wherein the first data format comprises float data type having a size greater than or equal to 32 bits.
9. The system as claimed in claim 7, wherein the second data format comprises integer data type having a size of 8 bits.
10. The system as claimed in claim 7, wherein the one or more processors are further configured to detect a state of the detection unit as one of an idle state and a busy state.
11. The system as claimed in claim 7, wherein, for predicting the engine malfunction, the one or more processors are further configured to determine an engine condition as one of normal condition, check condition, danger condition and error condition.
12. The system as claimed in claim 11, wherein the check condition indicates the engine having a low possibility of malfunctioning, the danger condition indicates the engine having a high possibility of malfunctioning, and the error condition indicates a failure to determine the engine condition.