Abstract: The present invention discloses a system and method for standardizing images and detecting defects in vehicle body parts. The method includes capturing a real time image of a vehicle (108). The method includes processing the real time image of the vehicle (108) to generate a standardized real time image of the vehicle (108). The method includes identifying body parts of the vehicle (108) in the standardized real time image. Furthermore, the method includes determining a type of vehicle part surface of the standardized real time image. Further, the method includes generating an artificial intelligence-based defect model for the identified body parts of the vehicle (108) based on the type of vehicle part surface. Furthermore, the method includes determining a defect associated with the identified body parts of the vehicle (108) based on the generated artificial intelligence-based defect model. Also, the method includes outputting the determined defect associated with the body parts of the vehicle (108).
Claims:
We Claim:
1. A system (100a, 100b) for standardizing images and detecting defects in a vehicle body part, the system (100a, 100b) comprising:
one or more hardware processors (202); and
a memory (204) coupled to the one or more hardware processors (202), wherein the memory (204) comprises a plurality of subsystems (110) in the form of programmable instructions executable by the one or more hardware processors (202), wherein the plurality of subsystems (110) comprises:
an image capturing subsystem (210) configured for capturing a real time image of a vehicle (108) from an image capturing device (112a, 112b);
an image processing subsystem (212) configured for processing the captured real time image of the vehicle (108) to generate a standardized real time image of the vehicle (108);
a body part identification subsystem (214) configured for identifying one or more body parts of the vehicle (108) in the standardized real time image of the vehicle (108);
a defect model generation subsystem (216) configured for generating an artificial intelligence-based defect model for the identified one or more body parts of the vehicle (108);
a defect determination subsystem (218) configured for determining a defect associated with the identified one or more body parts of the vehicle (108) based on the generated artificial intelligence-based defect model; and
an output subsystem (220) configured for outputting the determined defect associated with the identified one or more body parts of the vehicle (108) on a user interface of the user device (106a).
2. The system (100a, 100b) as claimed in claim 1, wherein in processing the captured real time image of the vehicle (108) to generate the standardized real time image of the vehicle (108), the image processing subsystem (212) is configured for:
determining whether the vehicle (108) captured in the real time image is fully in frame in real time;
determining quality of the real time image using a set of image quality parameters if the vehicle (108) captured in the real time image is fully in frame in real time; and
generating the standardized real time image of the vehicle (108) based on the determined quality of the real time image, wherein the standardized real time image complies with the set of image quality parameters.
3. The system (100a, 100b) as claimed in claim 2, wherein in determining whether the vehicle (108) captured in the real time image is fully in frame in real time, the image processing subsystem (212) is configured for:
capturing an updated real time image of the vehicle (108) from the image capturing device (112a, 112b) if the vehicle (108) captured in the real time image is partially in frame in real time and if the real time image fails to meet the set of image quality parameters, wherein the updated real time image of the vehicle (108) is fully in frame in the real time.
4. The system (100a, 100b) as claimed in claim 1, wherein in identifying the one or more body parts of the vehicle (108) in the standardized real time image of the vehicle (108), the body part identification subsystem (214) is configured for:
classifying the standardized real time image of the vehicle (108) into one or more segments based on the one or more body parts of the vehicle (108) using a neural network technique;
determining confidence scores for the one or more body parts of the vehicle (108) based on an object detection mechanism;
determining location coordinates of the one or more body parts of the vehicle (108) based on a bounding box technique; and
assigning a label to the one or more body parts of the vehicle (108) based on the determined location coordinates using trained machine learning models in real-time.
5. The system (100a, 100b) as claimed in claim 1, wherein in generating the artificial intelligence-based defect model for the identified one or more body parts of the vehicle (108), the defect model generation subsystem (216) is configured for:
determining whether a type of vehicle part surface of standardized images is of a uniform type of vehicle part surface;
segmenting the images, comprising the one or more body parts, from the standardized images of the vehicle (108) using a vehicle body part segmentation model if the type of vehicle part surface of the standardized images is of a uniform type of vehicle part surface;
generating a set of cropped images of the segmented images, wherein each cropped image comprises sub-parts of the one or more body parts; and
generating the artificial intelligence-based defect model using each of the generated set of cropped images and a non-cropped image based on one or more rules, wherein the artificial intelligence-based defect model comprises one or more defect scores, defect masks, location coordinates of the sub-parts having the defect, and a type of the defect, and wherein the defect scores and defect masks indicate the sub-parts of the one or more body parts having a defect.
6. The system (100a, 100b) as claimed in claim 1, wherein in generating the artificial intelligence-based defect model for the identified one or more body parts of the vehicle (108), the defect model generation subsystem (216) is configured for:
determining whether a type of vehicle part surface of standardized images is of a non-uniform type of vehicle part surface using one or more prestored rules;
determining a centre point and an axis of symmetry for the one or more body parts of the vehicle (108) with respect to one or more reference points within the standardized images if the type of vehicle part surface of the standardized images is of a non-uniform type of vehicle part surface;
segmenting the bounding box image comprising the one or more body parts into two or more regions at the determined centre point and across the determined axis of symmetry; and
generating the artificial intelligence-based defect model, wherein the artificial intelligence-based defect model comprises one or more defect scores, defect masks, and location coordinates of the sub-parts having the defect, and wherein the defect scores and defect masks indicate the sub-parts of the one or more body parts having a defect.
7. The system (100a, 100b) as claimed in claim 1, wherein in determining the defect associated with the identified one or more body parts of the vehicle (108) based on the generated artificial intelligence-based defect model, the defect determination subsystem (218) is configured for:
determining whether a defect score of the one or more body parts of the vehicle (108) is more than a threshold value; and
determining the defect associated with the identified one or more body parts of the vehicle (108) if the defect score of the one or more body parts of the vehicle (108) is more than the threshold value.
8. The system (100a, 100b) as claimed in claim 1, wherein in outputting the determined defect associated with the identified one or more body parts of the vehicle (108) on the user interface of the user device (106a), the output subsystem (220) is configured for:
identifying location coordinates of a defect area and a type of defect associated with the one or more body parts based on the determined defect, wherein the defect area comprises a segmented image of the body part having the defect;
superimposing the defect area on the standardized real time image of the vehicle (108) based on the identified location coordinates; and
outputting the superimposed image of the vehicle (108) indicating the determined defect on the user interface of the user device (106a).
9. A method (300) for standardizing images and detecting defects in a vehicle body part, the method (300) comprising:
capturing, by a processor (202), a real time image of a vehicle (108) from an image capturing device (112a, 112b);
processing, by the processor (202), the captured real time image of the vehicle (108) to generate a standardized real time image of the vehicle (108);
identifying, by the processor (202), one or more body parts of the vehicle (108) in the standardized real time image of the vehicle (108);
generating, by the processor (202), an artificial intelligence-based defect model for the identified one or more body parts of the vehicle (108);
determining, by the processor (202), a defect associated with the identified one or more body parts of the vehicle (108) based on the generated artificial intelligence-based defect model; and
outputting, by the processor (202), the determined defect associated with the identified one or more body parts of the vehicle (108) on a user interface of the user device (106a).
10. The method (300) as claimed in claim 9, wherein processing the captured real time image of the vehicle (108) to generate the standardized real time image of the vehicle (108) comprises:
determining whether the vehicle (108) captured in the real time image is fully in frame in real time;
determining quality of the real time image using a set of image quality parameters if the vehicle (108) captured in the real time image is fully in frame in real time; and
generating the standardized real time image of the vehicle (108) based on the determined quality of the real time image, wherein the standardized real time image complies with the set of image quality parameters.
11. The method (300) as claimed in claim 10, wherein determining whether the vehicle (108) captured in the real time image is fully in frame in real time comprises:
capturing an updated real time image of the vehicle (108) from the image capturing device (112a, 112b) if the vehicle (108) captured in the real time image is partially in frame in real time, wherein the updated real time image of the vehicle (108) is fully in frame in the real time.
12. The method (300) as claimed in claim 9, wherein identifying the one or more body parts of the vehicle (108) in the standardized real time image of the vehicle (108) comprises:
classifying the standardized real time image of the vehicle (108) into one or more segments based on the one or more body parts of the vehicle (108) using a neural network technique;
determining confidence scores for the one or more body parts of the vehicle (108) based on an object detection mechanism;
determining location coordinates of the one or more body parts of the vehicle (108) based on a bounding box technique; and
assigning a label to the one or more body parts of the vehicle (108) based on the determined location coordinates using trained machine learning models in real-time.
13. The method (300) as claimed in claim 9, wherein generating the artificial intelligence-based defect model for the identified one or more body parts of the vehicle (108) comprises:
determining whether a type of vehicle part surface of standardized images is of a uniform type of vehicle part surface using one or more image processing techniques;
segmenting the image, comprising the one or more body parts, from the standardized images of the vehicle (108) using a vehicle body part segmentation model if the type of vehicle part surface of the standardized images is of a uniform type of vehicle part surface;
generating a set of cropped images of the segmented image, wherein each cropped image comprises sub-parts of the one or more body parts; and
generating the artificial intelligence-based defect model using the generated set of cropped images and a non-cropped image based on one or more rules, wherein the artificial intelligence-based defect model comprises one or more defect scores, defect masks, location coordinates of the sub-parts having the defect, and a type of the defect, and wherein the defect scores and defect masks indicate the sub-parts of the one or more body parts having a defect.
14. The method (300) as claimed in claim 9, wherein generating the artificial intelligence-based defect model for the identified one or more body parts of the vehicle (108) comprises:
determining whether a type of vehicle part surface of standardized images is of a non-uniform type of vehicle part surface using one or more image processing techniques;
determining a centre point and an axis of symmetry for the one or more body parts of the vehicle (108) with respect to one or more reference points within the standardized images if the type of vehicle part surface of the standardized images is of a non-uniform type of vehicle part surface;
segmenting the image comprising the one or more body parts into two or more regions at the determined centre point and across the determined axis of symmetry; and
generating the artificial intelligence-based defect model, wherein the artificial intelligence-based defect model comprises one or more defect scores, defect masks, and location coordinates of the sub-parts having the defect, and wherein the defect scores and defect masks indicate the sub-parts of the one or more body parts having a defect.
15. The method (300) as claimed in claim 9, wherein determining the defect associated with the identified one or more body parts of the vehicle (108) based on the generated artificial intelligence-based defect model comprises:
determining whether a defect score of the one or more body parts of the vehicle (108) is more than a threshold value; and
determining the defect associated with the identified one or more body parts of the vehicle (108) if the defect score of the one or more body parts of the vehicle (108) is more than the threshold value.
16. The method (300) as claimed in claim 9, wherein outputting the determined defect associated with the identified one or more body parts of the vehicle (108) on the user interface of the user device (106a) comprises:
identifying location coordinates of a defect area and a type of defect associated with the one or more body parts based on the determined defect, wherein the defect area comprises a segmented image of the body part having the defect;
superimposing the defect area on the standardized real time image of the vehicle (108) based on the identified location coordinates; and
outputting the superimposed image of the vehicle (108) indicating the determined defect on the user interface of the user device (106a).
Description:
TECHNICAL FIELD
[1] The present subject matter relates generally to defect detection systems. More particularly, but not exclusively, the present subject matter discloses a system and a method for standardizing images and detecting defects in vehicle body parts using artificial intelligence and machine learning based techniques.
BACKGROUND
[2] Typically, in cases of damage or defect in automobiles (for example, from accidents, from weather events such as hailstorms, or from normal wear and tear), most identification and analysis of the damage is performed by human inspectors or appraisers. Generally, the inspectors will document any previous damage, perform quantifying steps such as counting and classifying current damaged areas (e.g., dents and scratches), and then calculate an insurance claim based on this information. Inspections are thus very subjective (i.e., based on the particular inspector or other factors such as the environment in which the damage is viewed), and prone to inconsistency and wide variations. Additionally, the inspection, appraisal and repair process involves numerous unrelated entities including estimators, insurers, parts suppliers, repair shops, rental agents, regulatory authorities, and of course, the vehicle owner. The result is a lengthy, complex and expensive process requiring participation and coordination of numerous entities.
[3] Conventional approaches fail to handle both uniform and non-uniform type vehicle surfaces for defect analysis. As a result, existing computational methods fail to detect defects associated with unsupported vehicle surfaces. Further, most existing systems fail to provide a real-time, end-to-end defect analysis system for a vehicle that does not require user intervention at any stage of defect detection.
[4] Moreover, such existing systems fail to assess the quality of images captured for defect analysis, leading to inaccurate and false detection of defects.
[5] Therefore, there exists a need for an improved method and system for standardizing images and detecting defects in vehicle body parts irrespective of the vehicular surfaces and which is more accurate in determining the defects of the vehicle.
SUMMARY
[6] One or more shortcomings of the prior art may be overcome, and additional advantages may be provided through the present disclosure. Additional features and advantages may be realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
[7] Disclosed herein is a system for standardizing images and detecting defects in a vehicle body part. The system includes a hardware processor and a memory coupled to the hardware processor. The memory includes a set of program instructions in the form of a plurality of subsystems, configured to be executed by the processor. The plurality of subsystems includes an image capturing subsystem configured for capturing a real time image of a vehicle from an image capturing device. The plurality of subsystems further includes an image processing subsystem configured for processing the captured real time image of the vehicle to generate a standardized real time image of the vehicle. Further, the plurality of subsystems includes a body part identification subsystem configured for identifying one or more body parts of the vehicle in the standardized real time image of the vehicle. Further, the plurality of subsystems includes a defect model generation subsystem configured for generating an artificial intelligence-based defect model for the identified one or more body parts of the vehicle. Furthermore, the plurality of subsystems includes a defect determination subsystem configured for determining a defect associated with the identified one or more body parts of the vehicle based on the generated artificial intelligence-based defect model. Additionally, the plurality of subsystems includes an output subsystem configured for outputting the determined defect associated with the identified one or more body parts of the vehicle on a user interface of a user device.
[8] Further, the present disclosure includes a method for standardizing images and detecting defects in a vehicle body part. The method includes capturing a real time image of a vehicle from an image capturing device. Further, the method includes processing the captured real time image of the vehicle to generate a standardized real time image of the vehicle. The method includes identifying one or more body parts of the vehicle in the standardized real time image of the vehicle. Furthermore, the method includes determining a type of vehicle part surface of the standardized real time image of the vehicle using one or more image processing techniques. The type of vehicle part surface comprises a uniform type and a non-uniform type. Further, the method includes generating an artificial intelligence-based defect model for the identified one or more body parts of the vehicle based on the determined type of vehicle part surface. Furthermore, the method includes determining a defect associated with the identified one or more body parts of the vehicle based on the generated artificial intelligence-based defect model. Also, the method includes outputting the determined defect associated with the identified one or more body parts of the vehicle on a user interface of a user device.
[9] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DIAGRAMS
[10] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[11] FIG 1A is a block diagram of a cloud computing environment capable of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention;
[12] FIG 1B is a block diagram of an edge computing environment capable of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention;
[13] FIG 2 is a block diagram of a computing system, such as those shown in FIG 1A and FIG 1B, capable of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention;
[14] FIG 3 is a process flowchart illustrating an exemplary method of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention;
[15] FIGs 4A-4B are screenshots of real time images captured by the image capturing device, according to an embodiment of the present invention;
[16] FIG 5 is a process flowchart illustrating real time process execution flow in an edge device, according to an embodiment of the present invention;
[17] FIG 6 is a schematic representation of artificial intelligence-based defect detection process, according to an embodiment of the present invention; and
[18] FIGs 7A-7C are schematic representations of the process of generating an artificial intelligence-based defect model for identified one or more body parts of the vehicle, according to an embodiment of the present invention.
[19] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[20] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[21] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[22] The terms “comprises”, “comprising”, “includes” or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that includes a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus proceeded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[23] Throughout the specification, the terms “subsystem” and “module” are used interchangeably.
[24] Throughout the specification, the terms ‘cloud computing system’ and ‘edge computing system’ may also be referred to as the ‘system’ and the ‘computing system’.
[25] A computer system (standalone, client or server computer system) configured by an application may constitute a “subsystem” that is configured and operated to perform certain operations. In one embodiment, the “subsystem” may be implemented mechanically or electronically, so a subsystem may comprise dedicated circuitry or logic that is permanently configured (within a special-purpose processor) to perform certain operations. In another embodiment, a “subsystem” may also comprise programmable logic or circuitry (as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations.
[26] Accordingly, the term “subsystem” should be understood to encompass a tangible entity, be that an entity that is physically constructed permanently configured (hardwired) or temporarily configured (programmed) to operate in a certain manner and/or to perform certain operations described herein.
[27] The present invention provides a system for standardizing images and detecting defects in a vehicle body part. The present system uses artificial intelligence-based techniques for vehicle part detection, image standardization (such as detecting whether images are in frame, too far or too close), user movement guidance, frame or image quality validation, and defect detection on the vehicle body without human intervention. The present system is able to detect defects on both uniform and non-uniform vehicle part surfaces. This results in capturing standardized (ideal) images of vehicle parts and in accurate determination of defects.
[28] Referring now to the drawings, and more particularly to FIGs. 1 through 7A-C, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[29] FIG 1A is a block diagram of a cloud computing environment 100a capable of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention. According to FIG 1A, the cloud computing environment 100a comprises a cloud computing system 102a which is capable of delivering cloud applications to a user device 106a via a network 104a. The cloud computing system 102a is also connected to a vehicle 108a via the network 104a. The vehicle 108a may be an automobile such as a car, a jeep, or any other automotive vehicle. The vehicle 108a comprises one or more body parts such as doors, windows, hood, roof, windshield, front bumper, rear bumper, and the like. In one specific embodiment, the one or more communication networks, such as 104a, may include, but are not limited to, an internet connection, a wireless fidelity (WI-FI) network and the like. Although FIG 1A illustrates the cloud computing system 102a connected to one user device 106a and one vehicle 108a, one skilled in the art can envision that the cloud computing system 102a can be connected to several user devices located at different locations and to several vehicles via the network 104a.
[30] The user device 106a can be a laptop computer, desktop computer, tablet computer, smartphone and the like. The user device 106a can access cloud applications via a local web browser or via Android or iOS applications. The user device 106a comprises an image capturing device 112a and an application programming interface (API) 114a. The image capturing device 112a may be a camera. The API 114a enables communication with the cloud computing system 102a.
[31] The cloud computing system 102a includes a cloud interface, cloud hardware and OS, a cloud computing platform, and a database. The cloud interface enables communication between the cloud computing platform and the user device 106a. Also, the cloud interface enables communication between the cloud computing platform and the vehicle 108a. The cloud hardware and OS may include one or more servers on which an operating system is installed and which include one or more processing units, one or more storage devices for storing data, and other peripherals required for providing cloud computing functionality. The cloud computing platform is a platform which implements functionalities such as data storage, data analysis, data processing, and data communication on the cloud hardware and OS via APIs and algorithms, and delivers the aforementioned cloud services. The cloud computing platform may include a combination of dedicated hardware and software built on top of the cloud hardware and OS. As used herein, "cloud computing environment" refers to a processing environment comprising configurable computing physical and logical assets, for example, networks, servers, storage, applications, services, etc., and data distributed over the cloud platform. The cloud computing environment 100a provides on-demand network access to a shared pool of the configurable computing physical and logical assets. The server may include one or more servers on which the OS is installed. The servers may comprise one or more processors, one or more storage devices, such as memory units, for storing data and machine-readable instructions, for example, applications and application programming interfaces (APIs), and other peripherals required for providing cloud computing functionality.
[32] The cloud computing system 102a comprises a plurality of subsystems 110a capable of vehicle alignment detection, user movement guidance, vehicle part detection, frame/image quality validation and defect detection on the vehicle body. The vehicle alignment detection includes detecting whether the image is in frame, too far or too close, and the like. A detailed view of the cloud computing system 102a and the plurality of subsystems 110a is provided in FIG. 2. In this case, the cloud computing system 102a chooses a cloud production framework based on the availability of hardware or on resource constraints. This production framework ranges from PyTorch/TorchScript, Caffe2, OpenVINO and the like.
[33] Similarly, FIG 1B is a block diagram of an edge computing environment 100b capable of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention. In FIG 1B, instead of a cloud computing system, an edge computing system 102b is provided that is capable of detecting defects in the vehicle body parts. In this case, the edge computing system 102b may be a smart phone, personal computer, laptop, tablet, and the like. The edge computing system 102b comprises an image capturing device 112b and a plurality of subsystems 110b. The image capturing device 112b may be a camera embedded within the edge computing system 102b. Further, the plurality of subsystems 110b is similar to those shown in FIG 1A. A detailed view and explanation of the plurality of subsystems 110b is provided in FIG. 2. In this case, the artificial intelligence (AI) model size, hardware requirements, computation time and the like are optimized such that the model is easily deployable on a mobile device, tablet or the like. For example, an AI model trained using the PyTorch framework is converted to the TorchScript format for edge deployment. This process has many challenges. For example, TorchScript conversion for SSD (single shot multibox detector) based models is difficult because the SSD model has a non-linear flow. In this case, the complete network flow is rewritten for conversion. Also, the SSD based models have extensive pre- and post-processing operations. For the processing required, the existing Android code files shared by PyTorch are updated and made compatible with the model of the present disclosure. The post-processing operations are conventionally written using Python libraries such as NumPy and the like, and as TorchScript does not support these libraries, these post-processing operations are rewritten in TorchScript-supported data types and operations. Thus, these post-processing operations become a part of the model TorchScript file itself.
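The following is an illustrative, non-limiting Python sketch of the kind of rewriting described above, in which detector post-processing is expressed using only TorchScript-supported tensor operations so that it can be scripted into the same TorchScript file as the model. The wrapper class, the score threshold and the assumed backbone output format are examples introduced solely for the sketch and are not the specific models of the disclosure.

import torch
import torch.nn as nn

class DetectorWithPostprocess(nn.Module):
    # Wraps an assumed backbone that returns raw boxes [N, 4] and scores [N].
    def __init__(self, backbone: nn.Module, score_threshold: float = 0.5):
        super().__init__()
        self.backbone = backbone
        self.score_threshold = score_threshold

    def forward(self, image: torch.Tensor):
        boxes, scores = self.backbone(image)
        # Post-processing written with torch operations only (no NumPy), so the
        # whole module can be converted with torch.jit.script for edge deployment.
        keep = scores > self.score_threshold
        return boxes[keep], scores[keep]

# scripted = torch.jit.script(DetectorWithPostprocess(trained_backbone))
# scripted.save("detector_edge.pt")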
[34] Those of ordinary skill in the art will appreciate that the hardware depicted in FIG 1A may vary for particular implementations. For example, other peripheral devices such as an optical disk drive, a Local Area Network (LAN), a Wide Area Network (WAN), a wireless (e.g., Wi-Fi) adapter, a graphics adapter, a disk controller, or an input/output (I/O) adapter may also be used in addition to or in place of the hardware depicted. The depicted example is provided for the purpose of explanation only and is not meant to imply architectural limitations with respect to the present disclosure.
[35] Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure is not being depicted or described herein. Instead, only so much of the cloud computing system 102a as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the cloud computing system 102a may conform to any of the various current implementations and practices known in the art.
[36] FIG 2 is a block diagram of a computing system, such as those shown in FIG 1A and FIG 1B, capable of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention. In FIG 2, the computing system 102 comprises a processor 202, a memory 204, and a database 206. The processor 202, the memory 204 and the database 206 are communicatively coupled through a system bus 208 or any similar mechanism. The memory 204 comprises a plurality of subsystems 110 (such as those shown in FIGs 1A-1B) in the form of programmable instructions executable by the one or more hardware processors 202. The plurality of subsystems 110 further includes a standardization tool 222 and a defect determination tool 224. The standardization tool 222 comprises an image capturing subsystem 210, an image processing subsystem 212, and a body part identification subsystem 214. The defect determination tool 224 comprises a defect model generation subsystem 216 and a defect determination subsystem 218. The plurality of subsystems 110 also comprises an output subsystem 220.
[37] The image capturing subsystem 210 is configured for capturing a real time image of a vehicle 108 from an image capturing device 112. For example, the real time image may be derived from a video captured by the image capturing device 112 or may be a still picture captured directly by the image capturing device 112. Specifically, the image frame is extracted frame by frame from a saved video file in the user device 106, or from a live video capture stream from the camera API of the user device 106 (where 24 frames are created per second and around 5-20 frames are processed per second, based on the compute capability of the edge device 102b (real time)), or from a still image captured from the camera API of the user device 106 (real time).
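By way of illustration only, the following Python sketch shows one way frames may be extracted frame by frame from a saved video file using OpenCV; the sampling interval and the helper name are assumptions made for the example and are not mandated by the disclosure.

import cv2

def extract_frames(video_path, process_every_n=3):
    # Yields every n-th frame from a saved video file as a BGR image array.
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % process_every_n == 0:
            yield frame  # frame handed to the image processing subsystem 212
        index += 1
    capture.release()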
[38] The image processing subsystem 212 is configured for processing the captured real time image of the vehicle 108 to generate a standardized real time image of the vehicle 108. In processing the captured real time image of the vehicle 108 to generate the standardized real time image of the vehicle 108, the image processing subsystem 212 is configured for determining whether the vehicle 108 captured in the real time image is fully in frame in real time. Further, the image processing subsystem 212 is configured for determining the quality of the real time image using a set of image quality parameters if the vehicle 108 captured in the real time image is fully in frame in real time. The set of image quality parameters comprises checking whether the image is blurred, checking whether the image is a low light image, checking whether the image has glare or bright spots on it, and the like. The quality of the image is defined by the set of image quality parameters. Also, the image processing subsystem 212 is configured for generating the standardized real time image of the vehicle 108 based on the determined quality of the real time image. The standardized real time image complies with the set of image quality parameters. An in-frame image is one in which the vehicle is properly aligned within the frame. In an embodiment, the quality parameter “image blur” is determined using a high pass filter and a variance computation of the filter output to decide whether the frame is blurred or not. The quality parameter “Frame Lowlight Validation” is determined by computing the average pixel value for each channel and then calculating the perceived brightness value ‘B’. If the value of ‘B’ is less than a fixed threshold, then the frame is a lowlight image. This is depicted in equation 1 below:
[39] B = sqrt(0.241 × R_mean² + 0.691 × G_mean² + 0.068 × B_mean²) … equation (1)
[40] Further, the quality parameter “Frame Glare Validation” is determined by checking whether the frame has glare or bright spots on it; for example, if the value of ‘B’ is greater than a fixed threshold, then the frame has glare on it. In an embodiment, the Frame Glare Validation is performed using an artificial intelligence-based model.
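For illustration only, the following Python sketch shows one possible realisation of the blur and low-light checks described above, using the variance of a high pass (Laplacian) response and the perceived brightness of equation (1); the threshold values are assumptions, not values prescribed by the disclosure.

import cv2
import numpy as np

def is_blurred(frame_bgr, blur_threshold=100.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Low variance of the high pass (Laplacian) output indicates a blurred frame.
    return cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold

def is_low_light(frame_bgr, brightness_threshold=60.0):
    # OpenCV stores channels in B, G, R order.
    b_mean, g_mean, r_mean = [frame_bgr[:, :, c].mean() for c in range(3)]
    # Perceived brightness B as in equation (1).
    brightness = np.sqrt(0.241 * r_mean ** 2 + 0.691 * g_mean ** 2 + 0.068 * b_mean ** 2)
    return brightness < brightness_threshold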
[41] Alternatively, in determining whether the vehicle 108 captured in the real time image is fully in frame in real time, the image processing subsystem 212 is configured for capturing an updated real time image of the vehicle 108 from the image capturing device 112 if the vehicle 108 captured in the real time image is only partially in frame in real time. The updated real time image of the vehicle 108 is fully in frame in real time. For example, in determining whether the image is in frame, the alignment of the body part in the image is identified. If the alignment is not proper, then the image processing subsystem 212 suggests in which direction the user should move the image capturing device 112 to capture the best frame.
[42] The image processing subsystem 212 is an artificial intelligence (AI) system that helps the user in capturing a properly aligned image (using the bounding box information calculated by the SSD network mentioned above), in terms of whether the vehicle part or parts are in frame or out of frame. This is calculated by checking whether the vehicle part bounding box goes outside the image boundary. In a case where the vehicle part or parts are too far or too close, the alignment is verified by calculating the area of the vehicle part bounding box and checking whether the area is greater than a maximum area threshold or less than a minimum area threshold. This subsystem 212 also helps in assessing the direction in which the user should move in order to capture a properly aligned image of the vehicle 108.
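A minimal Python sketch of the in-frame, too-far and too-close checks described in this paragraph is given below; the margin and area-ratio thresholds are illustrative assumptions only.

def check_alignment(box, frame_width, frame_height,
                    min_area_ratio=0.15, max_area_ratio=0.85, margin=5):
    x1, y1, x2, y2 = box
    # Out of frame if the vehicle part bounding box touches the image boundary.
    in_frame = (x1 > margin and y1 > margin and
                x2 < frame_width - margin and y2 < frame_height - margin)
    area_ratio = ((x2 - x1) * (y2 - y1)) / float(frame_width * frame_height)
    too_far = area_ratio < min_area_ratio    # bounding box area below the minimum threshold
    too_close = area_ratio > max_area_ratio  # bounding box area above the maximum threshold
    return in_frame, too_far, too_close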
[43] The body part identification subsystem 214 is configured for identifying one or more body parts of the vehicle 108 in the standardized real time image of the vehicle 108. This is achieved using the object detector output. This process of identification helps in determining whether the vehicle body part is properly aligned or not. In identifying the one or more body parts of the vehicle 108 in the standardized real time image of the vehicle 108, the body part identification subsystem 214 is configured for classifying the standardized real time image of the vehicle 108 into one or more segments based on the one or more body parts of the vehicle using a neural network technique. Each segment may comprise one or more body parts of the vehicle 108. Further, the body part identification subsystem 214 is configured for determining the ideal image for each vehicle part or segment using a neural network model, which is an AI model. The AI model is trained with ideal body part images and hence gives a higher confidence score to the most ideal images. Further, the body part identification subsystem 214 is configured for determining confidence scores for the one or more body parts of the vehicle 108 based on an object detection mechanism. Any known object detection mechanism may be used. For example, a confidence score is computed for each of the segmented images. The image having the highest score is determined to comprise the car body part.
[44] In an exemplary embodiment, for a particular vehicle part, the image capturing subsystem 210 captures multiple frames (images) such that the body part identification subsystem 214 scores each of these frames by analysing the various qualities of an ideal image (the image the end user prefers). This prior is imbibed into the body part identification subsystem 214 by careful preparation of training data. A trimmed version of a multi-branched/multi-headed MobileNet-V2 SSD Lite neural network is used in this body part identification subsystem 214. This network shares its backbone with other networks that are added later on. Thus, having a common backbone greatly reduces the FLOPs (floating point operations) per image.
[45] Further, the body part identification subsystem 214 is configured for determining location coordinates of the one or more body parts of the vehicle 108 based on a bounding box technique. Furthermore, the body part identification subsystem 214 is configured for assigning a label to the one or more body parts of the vehicle 108 based on the determined location coordinates using trained machine learning models in real time. For example, each identified body part is labelled as ‘bumper’, ‘windshield’, ‘door’ or the like based on the type of body part identified.
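By way of example only, the following Python sketch shows how raw detector output (bounding boxes, confidence scores and class indices) may be turned into labelled body parts; the label list and the score cut-off are assumptions made for the illustration.

PART_LABELS = ["bumper", "windshield", "door", "hood"]

def label_body_parts(boxes, scores, class_indices, min_score=0.5):
    labelled_parts = []
    for box, score, cls in zip(boxes, scores, class_indices):
        if score < min_score:
            continue
        labelled_parts.append({
            "label": PART_LABELS[cls],    # label assigned from the detected class
            "confidence": float(score),   # object detection confidence score
            "coordinates": tuple(box),    # bounding box location coordinates
        })
    return labelled_parts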
[46] The defect model generation subsystem 216 is configured for generating an artificial intelligence-based defect model for the identified one or more body parts of the vehicle 108. The artificial intelligence (AI) based defect model may be a neural network model. In generating the artificial intelligence-based defect model for the identified one or more body parts of the vehicle 108, the defect model generation subsystem 216 is configured for determining whether the type of vehicle part surface of the standardized images is of a uniform type of vehicle part surface. This standardized image may be a sample image pre-processed and prestored in the database. This standardized image amounts to training data. Further, the defect model generation subsystem 216 is configured for segmenting the image (such as a sample image), comprising the one or more body parts, from the standardized images of the vehicle 108 using a vehicle body part segmentation model if the type of vehicle part surface of the standardized images is of a uniform type of vehicle part surface. For example, if the type of vehicle part surface is a uniform type, then the image is segmented using the vehicle body part segmentation model, which is an AI model. This may be achieved using a UNet/FCN based network. Also, the input and output tensors are fixed at 320/512 for the segmentation images. In an embodiment, the Adam optimizer is used for training. Also, the loss function consists of a combination of BCE and DICE loss. The result is a set of segmented images of the vehicle 108. The loss function is an integral part of neural network training. In simple terms, the loss function evaluates the model’s performance during training and guides the model to update its parameters in the direction which improves the model performance. In this particular case, this loss function (a combination of BCE and DICE loss) guides the model to improve its segmentation performance. Further, the defect model generation subsystem 216 is configured for generating a set of cropped images of the segmented image. Each cropped image comprises sub-parts of the one or more body parts. In the segmented images, only those portions comprising the sub-parts of the main body parts are cropped out. Further, the defect model generation subsystem 216 is configured for generating the artificial intelligence-based defect model using each of the generated set of cropped images and a non-cropped image based on one or more prestored rules. The artificial intelligence-based defect model comprises one or more defect scores, defect masks, location coordinates of the sub-parts having the defect, and the type of the defect. The defect scores and defect masks indicate the sub-parts of the one or more body parts having a defect. In an embodiment, the defect masks generated using the cropped images are brought back to the original form and placed on top of the defect mask generated on the non-cropped image. The defect or no-defect analysis is done on this combined mask. The image cropping helps in solving issues such as data scarcity and small-size defects, while the non-cropped image helps in understanding spatial information of a vehicle part (for example, a handle on a door is not a defect).
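The combined BCE and DICE loss referred to above could, for example, take the following form in PyTorch; the equal weighting of the two terms and the smoothing constant are assumptions introduced for the sketch.

import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.smooth = smooth

    def forward(self, logits, target):
        bce_loss = self.bce(logits, target)
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum()
        dice = (2.0 * intersection + self.smooth) / (probs.sum() + target.sum() + self.smooth)
        # Equal weighting of the BCE and DICE terms guides the segmentation network
        # towards masks that overlap well with the ground truth.
        return bce_loss + (1.0 - dice)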
[47] In cases where the determined type of vehicle part surface is a non-uniform vehicle part surface but has an axis of symmetry, then, in generating the artificial intelligence-based defect model for the identified one or more body parts of the vehicle 108, the defect model generation subsystem 216 is configured for determining whether the type of vehicle part surface of the standardized images is of a non-uniform type of vehicle part surface using one or more prestored rules. Further, the defect model generation subsystem 216 is configured for determining a centre point and the axis of symmetry for the one or more body parts of the vehicle 108 with respect to one or more reference points within the standardized images if the type of vehicle part surface of the standardized images is of a non-uniform type of vehicle part surface. The one or more reference points may be a car logo. For example, the centre point and axis of symmetry of the logo is determined. Further, the centre point and axis of symmetry of the hood is determined. Also, the centre point and axis of symmetry of the windshield is determined. Using all these vertical lines, the best axis of symmetry to split the body part is determined. The line L1 is the line equidistant from all the above lines. This is more clearly explained in FIGs 7A-C. Further, the defect model generation subsystem 216 is configured for segmenting the image comprising the one or more body parts into two or more regions at the determined centre point and across the determined axis of symmetry. These segmented regions are, say, X1 and X2. Further, the defect model generation subsystem 216 is configured for finding the dissimilarity between the segmented regions X1 and X2, since X1 and X2 are supposed to be symmetric, and any dissimilarity between these regions suggests the presence of a defect. Further, the defect model generation subsystem 216 is configured for generating the artificial intelligence-based defect model based on one or more prestored rules. The artificial intelligence-based defect model comprises one or more defect scores, defect masks, and location coordinates of the sub-parts having the defect. The defect scores and defect masks indicate the sub-parts of the one or more body parts having a defect. The inputs to the AI model are X1 and X2. The output of the AI model is the defect score and defect mask along with the location coordinates of the sub-parts having the defect.
[48] The defect mask ‘y’ is determined using function F. This is computed using equation 2 below:
[49] F(X1, X2) = y … equation (2)
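For illustration only, one simple pixel-wise form of the function F of equation (2) is sketched below in Python: the second region is mirrored so that symmetric points align, and strong dissimilarities are marked as defect pixels. In the disclosure F is realised by an AI model; this rule-based sketch and its threshold are assumptions used only to convey the idea.

import numpy as np

def symmetry_defect_mask(x1, x2, dissimilarity_threshold=40.0):
    # Mirror region X2 so that points symmetric about the axis align with X1.
    x2_mirrored = np.flip(x2, axis=1)
    h = min(x1.shape[0], x2_mirrored.shape[0])
    w = min(x1.shape[1], x2_mirrored.shape[1])
    diff = np.abs(x1[:h, :w].astype(np.float32) - x2_mirrored[:h, :w].astype(np.float32))
    if diff.ndim == 3:
        diff = diff.mean(axis=2)
    # Regions that should be symmetric but differ strongly are marked as defect mask 'y'.
    return (diff > dissimilarity_threshold).astype(np.uint8)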
[50] The defect determination subsystem 218 is configured for determining a defect associated with the identified one or more body parts of the vehicle 108 based on the generated artificial intelligence-based defect model. The defect may be damage or wear and tear of the body parts of the vehicle 108. In determining the defect associated with the identified one or more body parts of the vehicle 108 based on the generated artificial intelligence-based defect model, the defect determination subsystem 218 is configured for determining whether the defect score of the one or more body parts of the vehicle 108 is more than a threshold value. The threshold value is predefined. The defect score is computed using the neural network. If the defect score of the one or more body parts of the vehicle 108 is more than the threshold value, then the defect determination subsystem 218 is configured for determining the defect associated with the identified one or more body parts of the vehicle 108. In other words, if the defect score is more than the threshold value, then a defect is found.
[51] In an exemplary embodiment, the frame that is validated by the previous subsystems (up to 216) is then processed by the defect determination subsystem 218. The defect determination subsystem 218 is also an AI system. This AI system segments the vehicle parts from the frame and checks for defects on them. The defect detection process varies for vehicle parts with a uniform surface (such as, for example, the hood, windshield and the like) and a non-uniform surface (such as, for example, the front bumper, rear bumper and the like). For vehicle parts with the uniform surface, both segmentation and defect detection are done using UNet-based networks. The networks are optimized for low memory access latency, minimum FLOPs and other model optimization techniques. The segmented vehicle parts are cropped and checked for defects by the defect determination subsystem 218, using the defect model generated by the defect model generation subsystem 216. The defect determination subsystem 218 then outputs the types of defects and the coordinates of the defects on the vehicle part.
[52] The output subsystem 220 is configured for outputting the determined defect associated with the identified one or more body parts of the vehicle 108 on a user interface of the user device 106. In outputting the determined defect associated with the identified one or more body parts of the vehicle 108 on the user interface of the user device 106, the output subsystem 220 is configured for identifying the location coordinates of the defect area and the type of defect associated with the one or more body parts based on the determined defect. The defect area comprises the segmented image of the body part having the defect. The type of defect may be a repairable defect, an unrepairable defect, replacement required, maintenance required and the like. Further, the output subsystem 220 is configured for superimposing the defect area on the standardized real time image of the vehicle 108 based on the identified location coordinates. The defect area is overlaid onto the original standard image as captured by the image capturing device 112. Before overlaying, all the segmented or cropped images are concatenated into one single image having the defect area, the location coordinates of the defect and other information. This concatenated image is then overlaid onto the original standard image as captured by the image capturing device 112. Furthermore, the output subsystem 220 is configured for outputting the superimposed image of the vehicle 108 indicating the determined defect on the user interface of the user device 106.
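A minimal Python sketch of the superimposing step is given below, assuming the defect mask and the top-left location coordinates of the defect area are already available; the highlight colour and blending weight are illustrative assumptions.

import cv2
import numpy as np

def superimpose_defect(standard_image, defect_mask, top_left, alpha=0.5):
    output = standard_image.copy()
    x, y = top_left
    h, w = defect_mask.shape[:2]
    region = output[y:y + h, x:x + w]
    # Paint the defect pixels red and blend them onto the original region.
    highlight = region.copy()
    highlight[defect_mask > 0] = (0, 0, 255)
    output[y:y + h, x:x + w] = cv2.addWeighted(region, 1.0 - alpha, highlight, alpha, 0)
    return output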
[53] The database 206 is configured to store all information related to the vehicle 108, and the user device 106. For example, the database 206 is configured to store captured real time images of the vehicle 108, the vehicle body part segmentation model, updated real time image of the vehicle 108, the artificial intelligence-based defect model, identified body parts, segmented and cropped images, location coordinates, defect determined, type of defect, defect score, defect mask, root cause of defect, bounding box image and the like.
[54] FIG 3 is a process flowchart illustrating an exemplary method 300 of standardizing images and detecting defects in a vehicle body part, according to an embodiment of the present invention. At step 302, a real time image of a vehicle 108 is captured from an image capturing device 112. At step 304, the captured real time image of the vehicle 108 is processed to generate a standardized real time image of the vehicle 108. At step 306, one or more body parts of the vehicle 108 are identified in the standardized real time image of the vehicle 108. At step 308, an artificial intelligence-based defect model for the identified one or more body parts of the vehicle 108 is generated. At step 310, a defect associated with the identified one or more body parts of the vehicle 108 is determined based on the generated artificial intelligence-based defect model. At step 312, the determined defect associated with the identified one or more body parts of the vehicle 108 is outputted on a user interface of the user device 106.
[55] The method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[56] FIGs 4A-4B are screenshots of real time images captured by the image capturing device, according to an embodiment of the present invention. FIG 4A represents an image of the vehicle 108 which is fully in frame. That is, the captured real time image of the vehicle 108 is fully in frame and hence does not require recapturing the image of the vehicle 108. This is determined during the step of processing the captured real time image of the vehicle 108 to generate the standardized real time image of the vehicle 108. The system 100 gives feedback to the user via a text coloration in the top left position (such as red or green), a box coloration around the vehicle 108 (such as red or green), a text coloration in the bottom centre (such as red or green) and user movement direction arrows (not shown in FIG 4A). If a frame passes all validations, then the capture button is activated, and the user can capture the validated image (or configure the system 100 to automatically capture the validated image).
[57] FIG 4B represents an image of the vehicle 108 which is partially in frame. That is, the captured real time image of the vehicle 108 is out of the frame or only partially in frame and hence requires recapturing the image of the vehicle 108. In this case, the image processing subsystem 212 recommends capturing an updated real time image of the vehicle 108 from the image capturing device 112. The updated real time image of the vehicle 108 is fully in frame in real time. For example, the image processing subsystem 212 suggests the direction in which the user has to move the camera and the distance between the car and the camera in order to capture an image which is fully in frame.
[58] FIG 5 is a process flowchart illustrating the real time process execution flow in an edge device 102b, according to an embodiment of the present invention. At step 502, a real time frame of the vehicle 108 is obtained from the camera. Further, the system 100b updates the latest frame of the vehicle 108 captured from the camera onto the test device screen, which is the computing device screen. At step 504, it is checked whether the model inference thread is free or not. If the model thread is free, then, at step 506, model inference is performed, and it is determined whether a detection of the vehicle 108 is found. The model inference thread is a thread that handles model inference in the user device. If a detection is found, then at step 508, the frame is postprocessed and the vehicle part names and inference confidences are obtained. If there is no detection, then at step 512, the frame is ignored. In case the model thread is not free, then again at step 512, the frame is ignored. At step 510, it is checked whether the vehicle part is in frame or not. If the vehicle part is in frame, then at step 514, it is checked whether the vehicle 108 is too close to the test device, for example, 102b. Further, at step 516, it is also checked whether the vehicle 108 is too far from the test device, for example, 102b. At step 518, the results of these three checks at steps 510, 514 and 516 are used to compute AI guided movement assistance information for the user. This information is updated on the user interface such that the user can adjust the camera in order to successfully capture the vehicle 108. If the results of these three checks 510, 514 and 516 are negative, then at step 512, the frame is ignored. At step 520, it is checked whether the captured image is blurred or not (also referred to as ‘Blur Validation’). Further, at step 522, it is checked whether the image has enough light or not (also referred to as ‘Low Light Validation’). At step 524, it is checked whether the image has glare in it (also referred to as ‘Glare Validation’). At step 526, it is checked whether the frame passed all validations performed in steps 520, 522 and 524. If yes, then loop A is performed, that is, the frame is passed to the defect detection process. The results of the validations of steps 520, 522 and 524 are updated on the user interface screen.
[59] At step 528, vehicle part segmentation is performed using a vehicle part segmentation model. At step 530, the output of the segmentation step is postprocessed. At step 532, it is checked whether the vehicle body part surface type is uniform. If not, then at step 534, defect detection model inference for a non-uniform surface part is performed. If yes, then at step 538, defect detection model inference for a uniform surface part is performed. Later, at step 536, it is determined whether a defect is detected. The result of the determination is passed on to loop B, which is displayed on the user interface screen. For example, the uniform surface parts may be the four doors and the four windows along with the hood and the windshield, while the non-uniform surface parts with symmetry along the vertical axis may be the rear bumper and the front bumper. Based on the type of vehicle part surface, separate defect models for the different surface types (uniform versus non-uniform with symmetry) are applied. Thus, defect information is computed and passed to the user by updating the test device user interface screen.
[60] In an exemplary embodiment, a user starts walking around the vehicle 108 to be inspected. For each frame, the present system 100 gives results to the user in real time. At a first stage, object detection and confidence score computation are performed, in which the vehicle part is identified with a confidence score. For example, the vehicle part may be the front door on the passenger side and the confidence score may be 98%. At a second stage, the vehicle alignment status is determined. For example, it is checked whether the vehicle body part is in frame, whether it is centre aligned, whether it is too far or too close, and movement direction arrows are provided, and the like. At a third stage, image quality validation is performed, in which it is checked whether the frame/image has any quality issues such as blur, low light, glare and the like. For blurriness, both the vehicle part and its surroundings are checked for blur. For low light, the vehicle surroundings are checked for low light. For glare, the vehicle part is checked for glare bright spots. At a fourth stage, defect detection is performed. If all the validations at the image quality validation stage are successful, then defect detection is performed. In other words, the defect detection is performed on an image that passes all the above three validations. For each frame/image, the present system 100 performs all the above-mentioned validations and gives feedback to the user to capture a proper frame. In the defect detection process, the captured frame or image, the segmented bounding box image and the vehicle part name are fed to a vehicle part segmentation model. The output of this model is a segmented vehicle part. At this point, uniform and non-uniform vehicle part surfaces are defined. For example, a uniform vehicle part surface may be the set of four doors, the set of four windows, the set of two windshields and the like. The non-uniform vehicle part surfaces with symmetry may be the front bumpers, the rear bumpers and the like.
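A minimal sketch of the third-stage quality checks is given below. The metrics used (Laplacian variance for blur, mean intensity for low light, saturated-pixel fraction for glare) and the thresholds are common heuristics assumed here for illustration; they are not the patented validation models.

```python
# Sketch of the Blur / Low Light / Glare validations using standard OpenCV operations.
import cv2
import numpy as np

def validate_quality(frame_bgr, blur_thresh=100.0, dark_thresh=60, glare_ratio=0.02):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance suggests a blurred frame
    mean_intensity = float(gray.mean())                  # low mean intensity suggests low light
    bright_fraction = float((gray > 245).mean())         # many saturated pixels suggest glare
    return {
        "blur_ok": blur_score >= blur_thresh,
        "light_ok": mean_intensity >= dark_thresh,
        "glare_ok": bright_fraction <= glare_ratio,
    }
```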
[61] During the defect detection process, if the vehicle part surface is a uniform part surface, then an artificial intelligence-based defect detection model specific to uniform surfaces is applied. The output of such a model is the defect type, the defect coordinates, and the defect confidence. On the other hand, if the vehicle part surface is a non-uniform part surface, then an artificial intelligence-based defect detection model specific to non-uniform surfaces is applied. The output of such a model is likewise the defect type, the defect coordinates, and the defect confidence.
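A minimal sketch of this routing step, assuming hypothetical part-name groupings and model objects with a `predict` method, is shown below; it only illustrates selecting the surface-specific model and returning the structured outputs named above.

```python
# Sketch: route a segmented part to the surface-specific defect model and
# return defect type, coordinates and confidence (all names are illustrative).
from dataclasses import dataclass
from typing import List, Tuple

UNIFORM_PARTS = {"door", "window", "hood", "windshield"}
NON_UNIFORM_PARTS = {"front_bumper", "rear_bumper"}

@dataclass
class Defect:
    defect_type: str
    coordinates: Tuple[int, int, int, int]   # x1, y1, x2, y2 in image pixels
    confidence: float

def detect_defects(part_name, part_image, uniform_model, symmetry_model) -> List[Defect]:
    if part_name in UNIFORM_PARTS:
        raw = uniform_model.predict(part_image)     # model for uniform surfaces
    elif part_name in NON_UNIFORM_PARTS:
        raw = symmetry_model.predict(part_image)    # model for symmetric, non-uniform surfaces
    else:
        return []
    return [Defect(r["type"], tuple(r["box"]), r["score"]) for r in raw]
```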
[62] FIG 6 is a schematic representation of an artificial intelligence-based defect detection process 600, according to an embodiment of the present invention. The method 600 corresponds to the defect detection process for a uniform type of vehicle part surface. At step 602, a real time image of the vehicle 108 is captured. At steps 604a and 604b, the body parts of the vehicle 108 are identified and, accordingly, the captured image is split. For example, at step 604a, the captured image is split into one image comprising the window body part segmentation. Similarly, at step 604b, the captured image is split into another image comprising the door body part segmentation. At step 606, the image is segmented using a vehicle body part segmentation model. At step 608, a set of cropped images is generated, each cropped image comprising sub parts of the one or more body parts, which is the window in this case. At step 610, these cropped images as well as the split image of step 604a are fed into the defect model generation subsystem 216 for applying a prestored artificial intelligence-based defect model. At step 612, a defect associated with the identified one or more body parts of the vehicle 108 is determined based on the artificial intelligence-based defect model. At step 614, the determined defect associated with the identified one or more body parts of the vehicle 108 is outputted as an overlaid image. Before outputting, the cropped images are concatenated (along with the non-cropped mask) and analysed to create the final defect mask so as to display the overall image with the body part having the defect. Similarly, in case of the door segmentation, after step 604b, at step 616, the image is segmented using the vehicle body part segmentation model. At step 618, a set of cropped images is generated, each cropped image comprising sub parts of the one or more body parts, which is the door in this case. At step 620, these cropped images as well as the split image of step 604b are fed into the defect model generation subsystem 216 for applying the prestored artificial intelligence-based defect model. At step 622, a defect associated with the identified one or more body parts of the vehicle 108 is determined based on the artificial intelligence-based defect model. At step 624, the determined defect associated with the identified one or more body parts of the vehicle 108 is outputted as an overlaid image. Before outputting, the cropped images are concatenated (along with the non-cropped mask) and analysed to create the final defect mask so as to display the overall image with the body part having the defect.
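A minimal sketch of the crop-and-merge step described above is given below, assuming that each sub part (for example, each individual window or door) appears as one connected component of the part segmentation mask and that a hypothetical per-crop defect model returns a binary defect mask. The connected-component cropping and the alpha-blended overlay are illustrative choices, not the exact implementation.

```python
# Sketch: crop sub-parts from the segmentation mask, run the defect model per crop,
# stitch the per-crop defect masks back together, and draw one overlay on the image.
import cv2
import numpy as np

def defect_overlay(image, part_mask, defect_model, alpha=0.4):
    full_defect_mask = np.zeros(part_mask.shape, dtype=np.uint8)
    # Each connected component of the part mask becomes one cropped sub-part image.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(part_mask.astype(np.uint8))
    for i in range(1, num):
        x = stats[i, cv2.CC_STAT_LEFT]
        y = stats[i, cv2.CC_STAT_TOP]
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        crop = image[y:y + h, x:x + w]
        crop_defects = defect_model.predict(crop)            # binary mask, same size as crop
        full_defect_mask[y:y + h, x:x + w] |= crop_defects   # concatenate back into full mask
    # Overlay the merged defect mask in red on the original image.
    overlay = image.copy()
    overlay[full_defect_mask > 0] = (0, 0, 255)
    return cv2.addWeighted(overlay, alpha, image, 1 - alpha, 0)
```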
[63] The method 600 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[64] FIGs 7A-7C are a schematic representation of the process of generating an artificial intelligence-based defect model for the identified one or more body parts of the vehicle 108, according to an embodiment of the present invention. FIGs 7A-7C correspond to the process of generating an artificial intelligence-based defect model for a non-uniform type of vehicle part surface. In FIG 7A, a method of identifying a bumper mask as a body part of the vehicle 108 using a bounding box technique is depicted. The bumper mask area is the region of interest and is highlighted. In FIG 7B, a method of determining the centre point and the axis of symmetry for the bumper mask is depicted. In this case, the axis of symmetry is marked as line L1. In FIG 7C, a method of segmenting the bounding box image comprising the one or more body parts into two regions along the determined centre point and across the determined axis of symmetry is depicted. The two regions are X1 and X2. An AI model is then used for finding the dissimilarity between the segmented regions X1 and X2; since X1 and X2 are expected to be symmetric, any dissimilarity between the two suggests the presence of a defect.
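The following is a minimal sketch of the symmetry comparison on a non-uniform part such as a bumper, assuming the axis of symmetry L1 is the vertical mid-line of the bounding-box crop and using a simple mean absolute difference as a stand-in for the AI dissimilarity model described above.

```python
# Sketch: split the bumper crop across the assumed axis of symmetry L1, mirror one
# half, and score the dissimilarity between regions X1 and X2.
import cv2
import numpy as np

def symmetry_dissimilarity(bumper_crop_bgr):
    gray = cv2.cvtColor(bumper_crop_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    mid = w // 2                              # centre point / vertical axis of symmetry L1 (assumed)
    x1 = gray[:, :mid]                        # region X1
    x2 = cv2.flip(gray[:, w - mid:], 1)       # region X2, mirrored about L1
    diff = cv2.absdiff(x1, x2).astype(np.float32)
    return float(diff.mean())                 # higher score means stronger asymmetry, hence a likely defect

# A score above an empirically chosen threshold would flag a possible defect.
```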
[65] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[66] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[67] The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
[68] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
[69] A representative hardware environment for practicing the embodiments may include a hardware configuration of an information handling/computer system in accordance with the embodiments herein. The system herein comprises at least one processor or central processing unit (CPU). The CPUs are interconnected via system bus to various devices such as a random-access memory (RAM), read-only memory (ROM), and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, such as disk units and tape drives, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
[70] The system further includes a user interface adapter that connects a keyboard, mouse, speaker, microphone, and/or other user interface devices such as a touch screen device (not shown) to the bus to gather user input. Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[71] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[72] The specification has described a method and a system for standardizing images and detecting defects in vehicle body parts. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[73] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.