
Systems And Methods For Training Deep Learning Models In Resource Constraint Devices

Abstract: Conventionally, a generic deep learning (DL) model is trained on an old version of data. It works initially, but as the system configuration/product version changes, the accuracy of the DL model drifts. Monitoring the process requires waiting to observe a certain amount of drift in accuracy and then retraining the model from scratch, which incurs training cost and time, requires more computing power, and suffers from a lack of sufficient data for the new configuration. The present disclosure provides systems and methods that reuse most of the parameter values that were learnt using the last version of data. Only the parameters near an output layer of the DL model are trained, and further a small subset of the original parameters can be trained. Thus, the DL model can be retrained frequently, resulting in consistent accuracy when implemented in resource constraint devices (e.g., edge devices). [To be published with FIG. 2]


Patent Information

Application #
Filing Date
26 September 2022
Publication Number
13/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th floor, Nariman point, Mumbai 400021, Maharashtra, India

Inventors

1. DUTTA, Suvra
Tata Consultancy Services Limited, Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160, West Bengal, India
2. CHATTOPADHYAY, Tanushyam
Tata Consultancy Services Limited, Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160, West Bengal, India
3. GHOSH, Shubhrangshu
Tata Consultancy Services Limited, Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160, West Bengal, India

Specification

Description: FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:

SYSTEMS AND METHODS FOR TRAINING DEEP LEARNING MODELS IN RESOURCE CONSTRAINT DEVICES

Applicant

Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to training deep learning models, and, more particularly, to systems and methods for training deep learning models in resource constraint devices.

BACKGROUND
[002] In the manufacturing, health care, insurance, and aviation industries, a very common problem is that the system configuration of a product, or the criterion to be monitored, changes frequently. It is challenging to evaluate the environment of a product under a new configuration using the pre-existing machine learning (ML)/deep learning (DL) model. Normally, the pre-existing model raises an alarm/alert under such scenarios, which in turn is rejected by a service engineer or any human being who monitors it, effectively resulting in a false alarm. A common practice in the above-mentioned industries is to update products very frequently to release newer versions; hence the requirement for the clients/customers of those industries is to evaluate the quality of the new product, or to detect any anomaly during the production process, without any rigorous training phase, so as to reduce the downtime due to system upgradation. It is therefore evident that the new product and the production process do not yield a huge volume of data under the new configuration, and hence re-training the ML/DL model from scratch with insufficient new observational data would result in inaccurate prediction by the ML/DL model.

SUMMARY
[003] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[004] For example, in one aspect, there is provided a processor implemented method for training deep learning models in resource constraint devices. The method comprises receiving, via one or more hardware processors, an input change during at least one of (i) an inspection in at least one assembly line, and (ii) a postproduction machine condition monitoring of a manufacturing system, wherein the manufacturing system is configured with a plurality of resource constraint devices and a plurality of sensors; observing, via the one or more hardware processors, a change in one or more observations obtained from the plurality of sensors configured to the manufacturing system based on the input change during at least one of (i) the inspection in at least one assembly line, and (ii) the postproduction machine condition monitoring of the manufacturing system; resetting, via the one or more hardware processors, (i) a last layer and a penultimate layer, or (ii) the last layer of a deep learning model comprised in the manufacturing system by randomly assigning a plurality of initial values to (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system based on the change in the one or more observations obtained from the plurality of sensors; training, via the one or more hardware processors, the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system in at least one resource constraint device from the plurality of resource constraint devices with recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system, wherein the resource constraint device is associated with the manufacturing system to obtain an intermediary trained deep learning model; and training, via the one or more hardware processors, all layers of the intermediary trained deep learning model at the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from the at least one sensor configured to the manufacturing system.
[005] In an embodiment, the input change during the inspection in the at least one assembly line comprises at least one of (i) a change in a configuration of a plurality of resource constraint devices of the manufacturing system, (ii) a change in an assembling process with no change to one or more machine parts, (iii) a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process, and (iv) a change in an assembling of the one or more machine parts.
[006] In an embodiment, the input change during postproduction machine condition monitoring comprises at least one of (i) a version upgrade in a manufacturing process of the manufacturing system, and (ii) a usage pattern of one or more specific machine parts.
[007] In an embodiment, the plurality of resource constraint devices is configured to monitor (i) one or more machine parts of a product, and (ii) an assembling process during manufacturing of a product.
[008] In an embodiment, the deep learning model is trained with historical data and configured in the manufacturing system with a pre-defined accuracy level.
[009] In another aspect, there is provided a processor implemented system for training deep learning models in resource constraint devices. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive an input change during at least one of (i) an inspection in at least one assembly line, and (ii) a postproduction machine condition monitoring of a manufacturing system, wherein the manufacturing system is configured with a plurality of resource constraint devices and a plurality of sensors; observe a change in one or more observations obtained from the plurality of sensors configured to the manufacturing system based on the input change during at least one of (i) the inspection in at least one assembly line, and (ii) the postproduction machine condition monitoring of the manufacturing system; reset (i) a last layer and a penultimate layer, or (ii) the last layer of a deep learning model comprised in the manufacturing system by randomly assigning a plurality of initial values to (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system based on the change in the one or more observations obtained from the plurality of sensors; train the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system in at least one resource constraint device from the plurality of resource constraint devices with recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system, wherein the resource constraint device is associated with the manufacturing system to obtain an intermediary trained deep learning model; and train all layers of the intermediary trained deep learning model at the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from the at least one sensor configured to the manufacturing system.
[010] In an embodiment, the input change during the inspection in the at least one assembly line comprises at least one of (i) a change in a configuration of a plurality of resource constraint devices of the manufacturing system, (ii) a change in an assembling process with no change to one or more machine parts, (iii) a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process, and (iv) a change in an assembling of the one or more machine parts.
[011] In an embodiment, the input change during postproduction machine condition monitoring comprises at least one of (i) a version upgrade in a manufacturing process of the manufacturing system, and (ii) a usage pattern of one or more specific machine parts.
[012] In an embodiment, the plurality of resource constraint devices is configured to monitor (i) one or more machine parts of a product, and (ii) an assembling process during manufacturing of a product.
[013] In an embodiment, the deep learning model is trained with historical data and configured in the manufacturing system with a pre-defined accuracy level.
[014] In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause training deep learning models in resource constraint devices by receiving an input change during at least one of (i) an inspection in at least one assembly line, and (ii) a postproduction machine condition monitoring of a manufacturing system, wherein the manufacturing system is configured with a plurality of resource constraint devices and a plurality of sensors; observing a change in one or more observations obtained from the plurality of sensors configured to the manufacturing system based on the input change during at least one of (i) the inspection in at least one assembly line, and (ii) the postproduction machine condition monitoring of the manufacturing system; resetting (i) a last layer and a penultimate layer, or (ii) the last layer of a deep learning model comprised in the manufacturing system by randomly assigning a plurality of initial values to (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system based on the change in the one or more observations obtained from the plurality of sensors; training the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system in at least one resource constraint device from the plurality of resource constraint devices with recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system, wherein the resource constraint device is associated with the manufacturing system to obtain an intermediary trained deep learning model; and training all layers of the intermediary trained deep learning model at the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from the at least one sensor configured to the manufacturing system.
[015] In an embodiment, the input change during the inspection in the at least one assembly line comprises at least one of (i) a change in a configuration of a plurality of resource constraint devices of the manufacturing system, (ii) a change in an assembling process with no change to one or more machine parts, (iii) a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process, and (iv) a change in an assembling of the one or more machine parts.
[016] In an embodiment, the input change during postproduction machine condition monitoring comprises at least one of (i) a version upgrade in a manufacturing process of the manufacturing system, and (ii) a usage pattern of one or more specific machine parts.
[017] In an embodiment, the plurality of resource constraint devices is configured to monitor (i) one or more machine parts of a product, and (ii) an assembling process during manufacturing of a product.
[018] In an embodiment, the deep learning model is trained with historical data and configured in the manufacturing system with a pre-defined accuracy level.
[019] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[020] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[021] FIG. 1 depicts an exemplary system for training deep learning models in resource constraint devices, in accordance with an embodiment of the present disclosure.
[022] FIG. 2 depicts an exemplary flow chart illustrating a method for training deep learning models in resource constraint devices, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
[023] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[024] As mentioned earlier, in the manufacturing, health care, insurance, and aviation industries a very common problem is that the system configuration of a product or the criterion to be monitored changes frequently. It is challenging to evaluate the environment of a product under a new configuration using the pre-existing machine learning (ML)/deep learning (DL) model. Normally, the pre-existing model raises an alarm/alert under such scenarios, which in turn is rejected by a service engineer or any human being who monitors it, effectively resulting in a false alarm. A common practice in the above-mentioned industries is to update products very frequently to release newer versions; hence the requirement for the clients/customers of those industries is to evaluate the quality of the new product, or to detect any anomaly during the production process, without any rigorous training phase, so as to reduce the downtime due to system upgradation. Conventionally, a generic DL model was trained on old versions of data. It works initially, but as the system configuration/product version changes, the model accuracy drifts. The team monitoring this process must wait to observe a certain amount of drift in accuracy, and then retrain the model from scratch. The main reasons for the wait time to retrain are the cost of training in terms of time and computing power, and the lack of sufficient data for the new configuration. Embodiments of the present disclosure provide systems and methods that reuse most of the parameter values that were learnt using the last version of data. Only the parameters near the output layer are trained (e.g., (i) the last layer, and/or (ii) the penultimate layer and the last layer of a deep learning (DL) model (or machine learning (ML) model)). Since the system and method described herein train a very small subset of the original parameters, less computing power and time is required and, most importantly, a lot less data. 
Thus, the monitoring team can afford to retrain the model much more frequently, resulting in a more consistent accuracy score. This saves stakeholders (e.g., end users, entities such as manufacturing industries, and the like) from complete retraining, as the system and method of the present disclosure assume only a small number of resource constraint devices (e.g., edge devices) have changed in the newer version.
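The saving described above can be illustrated with a minimal sketch (not the patented implementation): a small multilayer perceptron in which only the layer(s) nearest the output are re-initialised and counted as trainable. The layer sizes and the re-initialisation scale below are illustrative assumptions, not values prescribed by the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed layer shapes for illustration: input -> hidden -> hidden -> output.
sizes = [(8, 64), (64, 32), (32, 4)]
weights = [rng.standard_normal(s) for s in sizes]

def reset_and_count(weights, n_reset):
    """Re-initialise the last n_reset layers; return (new weights, trainable, frozen) counts."""
    new_w = [w.copy() for w in weights]
    for i in range(len(new_w) - n_reset, len(new_w)):
        new_w[i] = rng.standard_normal(new_w[i].shape)  # random initial values
    trainable = sum(w.size for w in new_w[-n_reset:])   # parameters to be retrained
    frozen = sum(w.size for w in new_w[:-n_reset])      # reused learnt parameters
    return new_w, trainable, frozen

# Resetting only the last layer retrains 32*4 = 128 parameters out of 2688 total.
_, trainable, frozen = reset_and_count(weights, n_reset=1)
print(trainable, frozen)  # 128 2560
```

With far fewer trainable parameters, retraining needs less computing power, less time, and less data, which is what makes frequent retraining on an edge device plausible.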
[025] Referring now to the drawings, and more particularly to FIGS. 1 through 2, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[026] FIG. 1 depicts an exemplary system 100 for training deep learning models in resource constraint devices, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices (e.g., smartphones, tablet phones, mobile communication devices, and the like), workstations, mainframe computers, servers, a network cloud, and the like.
[027] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[028] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises observational data obtained from various sensors of one or more manufacturing systems. The database 108 further comprises one or more machine learning models, one or more deep learning models, and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis. In an embodiment, the system 100 may also be referred to as a resource constrained device or a computing system/device that is either integrated and configured into a manufacturing system or externally connected to the manufacturing system to perform the method of FIG. 2 described herein.
[029] FIG. 2, with reference to FIG. 1, depicts an exemplary flow chart illustrating a method for training deep learning models in resource constraint devices, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 of FIG. 1, and the flow diagram as depicted in FIG. 2.
[030] At step 202 of the method of the present disclosure, the one or more hardware processors 104 receive an input change during at least one of (i) an inspection in at least one assembly line and (ii) a postproduction machine condition monitoring of a manufacturing system. The manufacturing system is configured with a plurality of resource constraint devices and a plurality of sensors. A few examples of sensors include temperature sensor(s), torque sensor(s), force sensor(s), Revolutions Per Minute (RPM) sensor(s), and the like. It is to be understood by a person having ordinary skill in the art or person skilled in the art that the configuration of the manufacturing system with the plurality of sensors may vary depending upon the type of the manufacturing system. In other words, a manufacturing system A may be configured with a first set of sensors, and a manufacturing system B that is different from the manufacturing system A (not identical) may be configured with a second set of sensors. So, in one scenario, the first set of sensors and the second set of sensors may be identical in nature. In another scenario, the first set of sensors and the second set of sensors may be different from each other. The plurality of resource constraint devices is configured to monitor (i) one or more machine parts of a product, and (ii) an assembling process during manufacturing of the product. An example of a resource constraint device may be a computing device/processing unit and a memory unit attached to a factory machine. Since the machine has to be portable/non-stationary in most cases, one cannot attach a powerful computing device/processing unit or a large memory. In other cases, a minimalistic computing device/processing unit/memory can be externally attached (or operatively connected or configured) to a legacy machine, as those machines may not contain any computing capabilities/memory at all. 
It is to be understood by a person having ordinary skill in the art or person skilled in the art that such examples of resource constraint device as mentioned above shall not be construed as limiting the scope of the present disclosure.
[031] In the present disclosure, the input change during the inspection in the at least one assembly line comprises at least one of (i) a change in a configuration of a plurality of resource constraint devices (e.g., welding device) of the manufacturing system (e.g., the welding system), (ii) a change in an assembling process with no change to one or more machine parts, (iii) a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process, and (iv) a change in an assembling of the one or more machine parts. It is to be understood by a person having ordinary skill in the art or person skilled in the art that the above examples of the input change during the inspection in the at least one assembly line from (i) through (iv) shall not be construed as limiting the scope of the present disclosure. In an embodiment of the present disclosure, inspection in the above example for input change may refer to a quality inspection. For instance, for a welding process, it can be a tensile strength inspection. For fast-moving consumer goods (FMCG) manufacturing, quality inspection may involve determining moisture and sugar levels in the final product, and the like. In an embodiment of the present disclosure, the assembly line (also referred to as assembly process or assembling line, and interchangeably used herein) for an FMCG product can comprise raw material conditioning, cutting, cooking, drying, and the like, wherein all of these are automated steps in any large enterprise. 
Configuration change (or a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process) may include examples such as, but not limited to: the duration of each step (e.g., previously a specific step was carried out for a specific time interval (e.g., seconds), and now it may be carried out for another time interval (e.g., minutes, hours, and the like)); a sequence of steps (e.g., previously the process was carried out as step 1, step 2, step 3, and so on, and in the subsequent operation it was carried out as step 1, step 3, step 2, and so on); or a change in the counting of one or more individual steps in an assembly process (e.g., step 1 was carried out, step 2 was carried out, and step 2 was repeated and considered as step 3). An example of an assembling process change may be that sensor values are changing. For instance, temperature has to be capped to a specific range in a new version of a product, whereas an older version had a different temperature range. An example of a machine part that had no change may be that a torque value does not change significantly compared to an older version of a manufacturing process. In an embodiment of the present disclosure, a resource constraint device in a typical realization may be a device (e.g., a mobile communication device, a computer system, a processing unit, an edge device, and the like) with an attached 1-core CPU and less than 1 GB memory as its configuration. It is to be understood by a person having ordinary skill in the art or person skilled in the art that the above example of the resource constraint device shall not be construed as limiting the scope of the present disclosure. Impact may include the manufacturing process, monitoring the associated quality, and the like, by the resource constrained device. 
Examples of the change in an assembling of the one or more machine parts include, but are not limited to, replacing a worn-out tool of a welding device or a welding system with a new identical or similar tool.
[032] In the present disclosure, the input change during postproduction machine condition monitoring comprises at least one of (i) a version upgrade in a manufacturing process of the manufacturing system, and (ii) a usage pattern of one or more specific machine parts. It is to be understood by a person having ordinary skill in the art or person skilled in the art that the above examples of the input change during postproduction machine condition monitoring from (i) through (ii) shall not be construed as limiting the scope of the present disclosure. In an embodiment of the present disclosure, examples of the version upgrade in a manufacturing process of the manufacturing system may include, but are not limited to, a new version of a car model being released, a new blend of a chewing gum being launched, and the like. In an embodiment of the present disclosure, examples of the usage pattern of machine parts may include, but are not limited to, car battery performance in hot and humid climatic conditions by an infrequent driver on a specific terrain (e.g., say city roads), and car battery performance in cold climatic conditions by a frequent driver on another terrain (e.g., say mountain roads). It is to be understood by a person having ordinary skill in the art or person skilled in the art that such events are not predictable beforehand, i.e., how the car would be used, wherein sensors would record different values depending on usage.
[033] The manufacturing system may be a welding system, in one example embodiment. In such scenarios, the resource constraint devices may include a welding device operatively connected to the welding system via one or more communication interfaces (e.g., serial buses, or cables, or wires, etc.). Table 1 below illustrates examples of received input change during inspection in at least one assembly line.
Table 1
force torque pow rpm vel uts fpr Power
4312.057 8.455 2.47349 1383.196 39.024 191.0587 0.028213 4123.308
8049.933 14.062 2.467242 990.8306 191.86 249.3077 0.193636 4112.892
4876.97 11.392 2.471188 998.8838 44.689 225.5982 0.044739 4119.47
4258.526 7.298 2.460666 1760.552 57.907 197.9676 0.032891 4101.93
3828.188 6.586 2.470202 1773.985 42.343 190.3269 0.023869 4117.827
4125.746 6.586 2.479737 2160.986 60.654 194.1913 0.028068 4133.722
3291.386 5.874 2.469544 2560.387 41.885 178.4046 0.016359 4116.73
4274.974 6.942 2.461652 1765.468 40.398 190.3269 0.022882 4103.574
3845.233 6.23 2.469544 2551.232 34.962 178.4046 0.013704 4116.73
5811.514 17.533 2.460008 599.4709 34.561 210.0918 0.057653 4100.833
1649.579 4.45 2.471517 2560.669 45.032 178.4046 0.017586 4120.019
5279.795 7.565 2.460995 1767.179 59.681 197.9676 0.033772 4102.479
4728.939 7.387 2.458693 1766.838 59.452 197.9676 0.033649 4098.641
3293.18 6.052 2.457378 2170.309 41.485 185.7625 0.019115 4096.449
3998.05 8.366 2.47612 1384.573 41.771 191.0587 0.030169 4127.692
1471.044 4.45 2.468886 2176.492 47.436 171.4773 0.021795 4115.633
4219.051 6.942 2.460008 1768.149 48.752 195.139 0.027572 4100.833
4191.837 7.565 2.461652 1769.137 45.833 195.139 0.025907 4103.574
5509.469 8.366 2.482368 1760.932 146.599 218.3649 0.083251 4138.107
4921.828 11.125 2.472503 998.6472 43.03 225.5982 0.043088 4121.663
4305.179 8.277 2.474147 1380.186 49.324 199.8093 0.035737 4124.403
5309.701 11.481 2.487629 1000.11 44.975 224.5843 0.04497 4146.878
1562.255 4.628 2.472174 1779.55 51.613 195.139 0.029003 4121.114
4934.089 13.528 2.199584 599.8845 61.111 215.4662 0.101871 3666.707
5045.936 13.973 2.19498 599.468 63.801 215.4662 0.106429 3659.032

[034] At step 204 of the method of the present disclosure, the one or more hardware processors 104 observe a change in one or more observations obtained from the plurality of sensors (also referred as amongst the plurality of sensors and interchangeably used herein) configured to the manufacturing system based on the input change during at least one of (i) the inspection in at least one assembly line and (ii) the postproduction machine condition monitoring of the manufacturing system. Observational data similar to that shown in Table 1 may have been obtained by the system 100 prior to receiving the information depicted in Table 1. In such scenarios, the data shown in Table 1 may be acquired by the system 100 after some interval. Therefore, there would be a change in the one or more observations obtained from the plurality of sensors. For instance, in the first row of Table 1, the sensor value against force is 4312.057. It may so happen that the prior information (or prior observational data) obtained has a sensor value against force of 4310.009. The difference between these two values is indicative of the change being observed. It is to be understood by a person of ordinary skill in the art or person skilled in the art that such observational data is (or may be) obtained periodically to observe the change in the one or more observations at various intervals of time.
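The observing of a change at step 204 can be illustrated with a minimal Python sketch. The function name, dictionary layout, and tolerance below are assumptions for illustration only and are not part of the disclosure; the two 'force' values are taken from Table 1 and paragraph [034].

```python
# Hypothetical sketch of step 204: observing a change between prior and
# recent sensor observations. Names and tolerance are assumptions.
def observe_change(prior, recent, tolerance=1e-6):
    """Return per-sensor absolute differences and whether any sensor changed."""
    diffs = {name: abs(recent[name] - prior[name]) for name in prior}
    changed = any(d > tolerance for d in diffs.values())
    return diffs, changed

# Example using the 'force' value from the first row of Table 1 and the
# prior value mentioned in paragraph [034].
prior = {"force": 4310.009}
recent = {"force": 4312.057}
diffs, changed = observe_change(prior, recent)  # difference of 2.048 observed
```

In practice such comparisons would run periodically over all sensor channels, as the paragraph above notes.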
[035] At step 206 of the method of the present disclosure, the one or more hardware processors 104 reset (i) a last layer and a penultimate layer or (ii) the last layer of a deep learning model comprised in the manufacturing system by randomly assigning a plurality of initial values to (i) the last layer and the penultimate layer or (ii) the last layer of the deep learning model comprised in the manufacturing system based on the change in the one or more observations obtained from the plurality of sensors. In other words, once the change in the one or more observations is observed, the one or more hardware processors 104 randomly assign a plurality of initial values to (i) a last layer and a penultimate layer or (ii) the last layer of a deep learning model comprised in the manufacturing system, thereby resetting layers of the deep learning model. The resetting of layers as mentioned above may happen (or happens) in a system where the DL model is already deployed (e.g., in the manufacturing system), in an embodiment of the present disclosure. The resetting of layers as mentioned above may happen (or happens) in a system where the DL model is going to be deployed (e.g., in the resource constraint device), in an embodiment of the present disclosure. In the present disclosure, the deep learning model is trained with historical data and configured in the manufacturing system with a pre-defined accuracy level (e.g., say 70% accuracy level). In other words, the deep learning model may have undergone prior training with some historical dataset (e.g., dataset and values shown in Table 2) and has an accuracy level of say 'x %' (e.g., 70% as mentioned above).
It is to be understood by a person having ordinary skill in the art or person skilled in the art that the training of the deep learning model may vary depending upon its implementation and the application it is deployed in, and hence is also subject to accuracy level variation, and such training and accuracy levels shall not be construed as limiting the scope of the present disclosure. The training may be a recurring event depending upon the output being predicted for a given application/implementation. For instance, say the deep learning model has predicted certain values (or outputs) and its prediction quality is indicated by 1 or 0 in Table 2 (e.g., refer columns “Prediction_of_product_quality_by_deep learning model”, and “GT_of_product_quality - domain experts”).
Table 2
force torque pow rpm vel uts fpr Power Prediction_of_product_quality_by_deep learning model GT_of_product_quality - domain experts
4312.057 8.455 2.47349 1383.196 39.024 191.0587 0.028213 4123.308 0 0
8049.933 14.062 2.467242 990.8306 191.86 249.3077 0.193636 4112.892 1 0
4876.97 11.392 2.471188 998.8838 44.689 225.5982 0.044739 4119.47 1 0
4258.526 7.298 2.460666 1760.552 57.907 197.9676 0.032891 4101.93 1 0
3828.188 6.586 2.470202 1773.985 42.343 190.3269 0.023869 4117.827 0 0
4125.746 6.586 2.479737 2160.986 60.654 194.1913 0.028068 4133.722 0 0
3291.386 5.874 2.469544 2560.387 41.885 178.4046 0.016359 4116.73 0 1
4274.974 6.942 2.461652 1765.468 40.398 190.3269 0.022882 4103.574 0 1
3845.233 6.23 2.469544 2551.232 34.962 178.4046 0.013704 4116.73 1 1
5811.514 17.533 2.460008 599.4709 34.561 210.0918 0.057653 4100.833 1 1
1649.579 4.45 2.471517 2560.669 45.032 178.4046 0.017586 4120.019 0 1
5279.795 7.565 2.460995 1767.179 59.681 197.9676 0.033772 4102.479 1 1
4728.939 7.387 2.458693 1766.838 59.452 197.9676 0.033649 4098.641 0 1
3293.18 6.052 2.457378 2170.309 41.485 185.7625 0.019115 4096.449 0 0
3998.05 8.366 2.47612 1384.573 41.771 191.0587 0.030169 4127.692 1 0
1471.044 4.45 2.468886 2176.492 47.436 171.4773 0.021795 4115.633 1 1
4219.051 6.942 2.460008 1768.149 48.752 195.139 0.027572 4100.833 1 0
4191.837 7.565 2.461652 1769.137 45.833 195.139 0.025907 4103.574 0 0
5509.469 8.366 2.482368 1760.932 146.599 218.3649 0.083251 4138.107 0 1
4921.828 11.125 2.472503 998.6472 43.03 225.5982 0.043088 4121.663 1 1
4305.179 8.277 2.474147 1380.186 49.324 199.8093 0.035737 4124.403 1 1
5309.701 11.481 2.487629 1000.11 44.975 224.5843 0.04497 4146.878 1 1
1562.255 4.628 2.472174 1779.55 51.613 195.139 0.029003 4121.114 1 0
4934.089 13.528 2.199584 599.8845 61.111 215.4662 0.101871 3666.707 0 0
5045.936 13.973 2.19498 599.468 63.801 215.4662 0.106429 3659.032 0 0
[036] The values under the columns “Prediction_of_product_quality_by_deep learning model”, and “GT_of_product_quality - domain experts” in the above Table 2 show the difference in the quality observed. Hence, the accuracy level in this case is say 56%, which is not acceptable as per the above pre-defined accuracy level. In such scenarios, the layers of the deep learning model are randomly assigned with values for training and prediction for subsequent incoming data. Examples of deep learning (DL) models and/or machine learning (ML) models include Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Radial Basis Function Networks (RBFNs), Multilayer Perceptrons (MLPs), Self-Organizing Maps (SOMs), Deep Belief Networks (DBNs), Restricted Boltzmann Machines (RBMs), Autoencoders, linear regression, decision trees, random forest, and XGBoost, and the like. It is to be understood by a person of ordinary skill in the art or person skilled in the art that the above-described examples of DL/ML models shall not be construed as limiting the scope of the present disclosure. The expression deep learning (DL) models and/or machine learning (ML) models may also be referred as ‘model’ and interchangeably used herein. It is to be understood by a person of ordinary skill in the art or person skilled in the art that the system and method implement the ML model or the DL model to perform the method of FIG. 2 and such implementation shall not be construed as limiting the scope of the present disclosure. A typical deep learning model has layers such as a convolution layer, a recurrent layer, a max pool layer, a batch normalization layer, and one or more fully connected layers.
It is to be understood by a person of ordinary skill in the art or person skilled in the art that the layers of the deep learning model architecture as described above are one of the examples and such example shall not be construed as limiting the scope of the present disclosure. In other words, the deep learning model (or any other machine learning model) may have as many layers as needed, and variations of such architectures can be implemented by the system and method of the present disclosure. Other variations of the DL model architecture include, but are not limited to, a deep learning model having fully connected layers in the beginning, a batch normalization layer in the middle, and fully connected layers again at the end.
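The accuracy figure of 56% mentioned in paragraph [036] can be reproduced from the two quality columns of Table 2, as the following sketch shows (the Python list layout is an assumption for illustration; the values are copied row by row from Table 2):

```python
# The two quality columns of Table 2, top to bottom.
predictions = [0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0,
               0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
ground_truth = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1,
                0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0]

# Accuracy = fraction of rows where prediction agrees with ground truth.
matches = sum(p == g for p, g in zip(predictions, ground_truth))
accuracy = matches / len(ground_truth)  # 14 / 25 = 0.56, i.e., 56%
```

Since 56% falls below the pre-defined accuracy level (e.g., 70%), the resetting of layers at step 206 is triggered.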
[037] However, for better understanding of the embodiments of the present disclosure, the system and method have considered the typical deep learning model that has layers such as the convolution layer, the recurrent layer, the max pool layer, the batch normalization layer, and one or more fully connected layers (e.g., say the last layer and the penultimate layer being fully connected layers). In such a deep learning model, the one or more hardware processors 104 randomly assign some initial values to either the last layer or both the penultimate layer and the last layer of the deep learning model. Typically, multiple values are assigned to each fully connected layer of the DL/ML model. The values assigned are based on the neurons present in the layers. There may be one or more neurons present in a layer (e.g., fully connected layer). Referring to step 206, say the last layer and the penultimate layer have values 0.3 and 0.7 respectively. The one or more hardware processors 104 randomly assign new values, say 0.5 in place of the existing value 0.3 and 0.9 in place of 0.7, respectively. It is to be understood by a person of ordinary skill in the art or person skilled in the art that the values may range between 0 and 1 and such random assigning of initial values as described above shall not be construed as limiting the scope of the present disclosure.
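The resetting at step 206 may be sketched as follows, assuming for illustration a small fully connected network stored as a list of NumPy weight matrices; the network sizes, the storage format, and the function name are assumptions and not part of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical network: input -> hidden -> penultimate -> last layer.
layers = [rng.standard_normal((8, 16)),   # earlier layer, learnt previously
          rng.standard_normal((16, 4)),   # penultimate layer
          rng.standard_normal((4, 1))]    # last (output) layer

def reset_layers(layers, reset_penultimate=True):
    """Randomly re-initialize the last layer, and optionally the penultimate
    layer, with values in [0, 1); earlier layers are left untouched."""
    n = 2 if reset_penultimate else 1
    for i in range(len(layers) - n, len(layers)):
        layers[i] = rng.uniform(0.0, 1.0, size=layers[i].shape)
    return layers

layers = reset_layers(layers)  # only the last two matrices are replaced
```

Note how the earlier layer keeps its previously learnt values, which is what allows the subsequent retraining at steps 208 and 210 to be cheap enough for a resource constraint device.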
[038] At step 208 of the method of the present disclosure, the one or more hardware processors 104 train the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer or (ii) the last layer of the deep learning model comprised in the manufacturing system in at least one resource constraint device from the plurality of resource constraint devices (also referred as amongst the plurality of resource constraint devices and interchangeably used herein) with a recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system. In other words, training of the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer or (ii) the last layer of the deep learning model happens (or may happen) in the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system, in an embodiment of the present disclosure. The resource constraint device is associated with the manufacturing system to obtain an intermediatory trained deep learning model. It is to be noted that the intermediatory trained deep learning model will no longer have randomly assigned values. However, the random assigning of initial values is (or may be) based on the pattern of the one or more observations (or observational data – also referred as sensor data and interchangeably used herein) obtained from the plurality of sensors. Table 3 below depicts the recent observational data based on which the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer or (ii) the last layer of the deep learning model comprised in the manufacturing system are trained.
Table 3
force torque pow rpm vel uts fpr power GT_of_product_quality
4312.057 8.455 2.47349 1383.1964 39.024 191.0587 0.028213 4123.308 0
8049.933 14.062 2.467242 990.8306 191.86 249.3077 0.193636 4112.892 0
4876.97 11.392 2.471188 998.8838 44.689 225.5982 0.044739 4119.47 0
4258.526 7.298 2.460666 1760.5524 57.907 197.9676 0.032891 4101.93 0
3828.188 6.586 2.470202 1773.9846 42.343 190.3269 0.023869 4117.827 0
4125.746 6.586 2.479737 2160.9863 60.654 194.1913 0.028068 4133.722 0
3291.386 5.874 2.469544 2560.3865 41.885 178.4046 0.016359 4116.73 1
4274.974 6.942 2.461652 1765.4681 40.398 190.3269 0.022882 4103.574 1
3845.233 6.23 2.469544 2551.2319 34.962 178.4046 0.013704 4116.73 1
5811.514 17.533 2.460008 599.4709 34.561 210.0918 0.057653 4100.833 1
1649.579 4.45 2.471517 2560.6694 45.032 178.4046 0.017586 4120.019 1
5279.795 7.565 2.460995 1767.179 59.681 197.9676 0.033772 4102.479 1
4728.939 7.387 2.458693 1766.8383 59.452 197.9676 0.033649 4098.641 1
3293.18 6.052 2.457378 2170.3088 41.485 185.7625 0.019115 4096.449 0

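The training at step 208, where only the reset output layer is updated while the earlier (frozen) layer keeps its previously learnt weights, may be sketched as follows. The tiny network, synthetic labels, and learning rate are assumptions for illustration only; the 14 rows echo the size of Table 3.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((14, 8))                 # 14 recent observations (cf. Table 3)
y = (rng.uniform(size=14) > 0.5).astype(float)   # synthetic quality labels

W_frozen = rng.standard_normal((8, 4))           # previously learnt, NOT updated
w_out = rng.uniform(size=4)                      # randomly reset last layer (step 206)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = np.tanh(X @ W_frozen)                        # frozen features, computed once

def bce(w):
    """Binary cross-entropy of the output layer on the frozen features."""
    p = sigmoid(H @ w)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

initial_loss = bce(w_out)
for _ in range(300):                             # gradient descent on w_out only
    p = sigmoid(H @ w_out)
    w_out -= 0.1 * (H.T @ (p - y) / len(y))
final_loss = bce(w_out)                          # lower than initial_loss
```

Because only the output layer's few parameters receive gradient updates, the per-step cost is a small matrix-vector product, which is what makes this step feasible on a resource constraint device.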
[039] Once the randomly assigned values of (i) the last layer and the penultimate layer, or (ii) the last layer are trained, at step 210 of the method of the present disclosure, the one or more hardware processors 104 train all layers of the intermediatory trained deep learning model at the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from the at least one sensor configured to the manufacturing system. In other words, training of all layers of the intermediatory trained deep learning model happens (or may happen) in the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system, in an embodiment of the present disclosure. The training is performed using the recent observational data depicted in Table 3. Once the DL/ML model is trained, it can be used for any application (e.g., prediction or classification of data such as values, images, and the like).
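The training of all layers at step 210 may be sketched as a full backpropagation pass over a small two-layer network; the network sizes, synthetic target, and learning rate are assumptions for illustration only and are not part of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((14, 8))              # recent observations (cf. Table 3)
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # synthetic, learnable target

W1 = rng.standard_normal((8, 4)) * 0.5        # earlier layer, now trainable again
w2 = rng.uniform(size=4)                      # last layer (trained in step 208)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_of(W1, w2):
    p = sigmoid(np.tanh(X @ W1) @ w2)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

initial_loss = loss_of(W1, w2)
lr = 0.05
for _ in range(500):                          # full backpropagation: all layers updated
    H = np.tanh(X @ W1)
    p = sigmoid(H @ w2)
    d_out = (p - y) / len(y)                  # dL/d(logit)
    grad_w2 = H.T @ d_out
    d_H = np.outer(d_out, w2) * (1.0 - H**2)  # back-propagate through tanh
    grad_W1 = X.T @ d_H
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1
final_loss = loss_of(W1, w2)                  # reduced relative to initial_loss
```

Since steps 206-208 already brought the output layer close to the new data distribution, this full fine-tuning pass needs far fewer epochs and far less data than retraining from scratch, consistent with paragraph [040].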
[040] As mentioned earlier, conventional approaches have implemented a deep learning (DL) model that is trained on old versions of data. This may work as intended initially, but as the system configuration/product version changes, the DL model accuracy drifts. The monitoring team must wait for a certain amount of drift in accuracy and then retrain the DL model from scratch. The main reasons for the wait time to retrain are the cost of training in terms of time and computing power, and the lack of sufficient data for the new configuration. Embodiments of the present disclosure provide systems and methods that reuse most of the parameter values that were learnt using the last version of data. Only the parameters near the output layer of the DL model are trained. Since a very small subset of the original parameters is trained, less computing power and time are required and, most importantly, less training data. Thus, any monitoring activity can afford to retrain the DL model much more frequently, resulting in consistently accurate scores (or predictions and/or classifications) as applicable.
[041] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[042] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[043] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[044] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[045] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[046] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
Claims:
We Claim:
1. A processor implemented method, comprising:
receiving, via one or more hardware processors, an input change during at least one of (i) an inspection in at least one assembly line, and (ii) a postproduction machine condition monitoring of a manufacturing system (202), wherein the manufacturing system is configured with a plurality of resource constraint devices and a plurality of sensors;
observing, via the one or more hardware processors, a change in one or more observations obtained from the plurality of sensors configured to the manufacturing system based on the input change during at least one of (i) the inspection in at least one assembly line, and (ii) the postproduction machine condition monitoring of the manufacturing system (204);
resetting, via the one or more hardware processors, (i) a last layer and a penultimate layer, or (ii) the last layer of a deep learning model comprised in the manufacturing system by randomly assigning a plurality of initial values to (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system based on the change in the one or more observations obtained from the plurality of sensors (206);
training, via the one or more hardware processors, the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system in at least one resource constraint device from the plurality of resource constraint devices with a recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system (208), wherein the resource constraint device is associated with the manufacturing system to obtain an intermediatory trained deep learning model; and
training, via the one or more hardware processors, all layers of the intermediatory trained deep learning model at the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from the at least one sensor configured to the manufacturing system (210).

2. The processor implemented method as claimed in claim 1, wherein the input change during the inspection in the at least one assembly line comprises at least one of (i) a change in a configuration of a plurality of resource constraint devices of the manufacturing system, (ii) a change in an assembling process with no change to one or more machine parts, (iii) a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process, and (iv) a change in an assembling of the one or more machine parts.

3. The processor implemented method as claimed in claim 1, wherein the input change during postproduction machine condition monitoring comprises at least one of (i) a version upgrade in a manufacturing process of the manufacturing system, and (ii) a usage pattern of one or more specific machine parts.

4. The processor implemented method as claimed in claim 1, wherein the plurality of resource constraint devices is configured to monitor (i) one or more machine parts of a product, and (ii) an assembling process during manufacturing of a product.

5. The processor implemented method as claimed in claim 1, wherein the deep learning model is trained with a historical data and configured in the manufacturing system with a pre-defined accuracy level.

6. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive an input change during at least one of (i) an inspection in at least one assembly line, and (ii) a postproduction machine condition monitoring of a manufacturing system, wherein the manufacturing system is configured with a plurality of resource constraint devices and a plurality of sensors;
observe a change in one or more observations obtained from the plurality of sensors configured to the manufacturing system based on the input change during at least one of (i) the inspection in at least one assembly line, and (ii) the postproduction machine condition monitoring of the manufacturing system;
reset (i) a last layer and a penultimate layer, or (ii) the last layer of a deep learning model comprised in the manufacturing system by randomly assigning a plurality of initial values to (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system based on the change in the one or more observations obtained from the plurality of sensors;
train the randomly assigned plurality of initial values of (i) the last layer and the penultimate layer, or (ii) the last layer of the deep learning model comprised in the manufacturing system in at least one resource constraint device from the plurality of resource constraint devices with a recent observational data obtained from one or more sensors from the plurality of sensors configured to the manufacturing system, wherein the resource constraint device is associated with the manufacturing system to obtain an intermediatory trained deep learning model; and
train all layers of the intermediatory trained deep learning model at the at least one resource constraint device from the plurality of resource constraint devices using the recent observational data obtained from the at least one sensor configured to the manufacturing system.

7. The system as claimed in claim 6, wherein the input change during the inspection in the at least one assembly line comprises at least one of (i) a change in a configuration of a plurality of resource constraint devices of the manufacturing system, (ii) a change in an assembling process with no change to one or more machine parts, (iii) a change in a configuration of the plurality of resource constraint devices that have an impact on the assembling process, and (iv) a change in an assembling of the one or more machine parts.

8. The system as claimed in claim 6, wherein the input change during postproduction machine condition monitoring comprises at least one of (i) a version upgrade in a manufacturing process of the manufacturing system, and (ii) a usage pattern of one or more specific machine parts.

9. The system as claimed in claim 6, wherein the plurality of resource constraint devices is configured to monitor (i) one or more machine parts of a product, and (ii) an assembling process during manufacturing of a product.

10. The system as claimed in claim 6, wherein the deep learning model is trained with a historical data and configured in the manufacturing system with a pre-defined accuracy level.

Dated this 26th Day of September 2022

Tata Consultancy Services Limited
By their Agent & Attorney

(Adheesh Nargolkar)
of Khaitan & Co
Reg No IN-PA-1086
