Abstract: The present disclosure relates to a system and method for glare reduction. The method comprises obtaining a first video stream, a dimension of a Liquid crystal display (LCD) and a location of the LCD, and identifying a set of lights from the first video stream using a machine learning based image processing methodology. The method further comprises determining one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity, and computing a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights, wherein the boundary is directly proportionate to the intensity of the one or more lights. The method comprises generating an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary, thereby reducing glare.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application does not claim priority from any patent application.
TECHNICAL FIELD
[002] The present disclosure in general relates to the field of safety. More particularly, the present subject matter relates to a system and a method for glare reduction.
BACKGROUND
[003] Generally, during night travel, the headlights of vehicles travelling in the opposite direction create glare and halos which can block up to 70% of the driver's vision, and this may lead to accidents while driving. Conventionally, anti-glare glasses available in the market are made of yellow or amber glass, but they still do not address the problem fully. Further, polarized glasses available in the market reduce the brightness of the headlight beam of the oncoming vehicle, but they also reduce the brightness of the darker sides of the road. This leads to lower visibility.
SUMMARY
[004] Before the present system and method for glare reduction are described, it is to be understood that this application is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular implementations, versions, or embodiments only, and is not intended to limit the scope of the present application. This summary is provided to introduce aspects related to a system and a method for glare reduction. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[005] In one embodiment, a method for glare reduction is disclosed. In the embodiment, the method comprises obtaining a first video stream, a dimension of a Liquid crystal display (LCD) and a location of the LCD, and identifying a set of lights from the first video stream using a machine learning based image processing methodology. The method further comprises determining one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity, and computing a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights, wherein the boundary is directly proportionate to the intensity of the one or more lights. The method comprises generating an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary, thereby reducing glare.
[006] In one embodiment, a system for glare reduction may be disclosed. The system comprises a memory and a processor coupled to the memory, wherein the processor may be configured to execute programmed instructions stored in the memory. In the embodiment, the system may obtain a first video stream from a first camera, a dimension of a Liquid crystal display (LCD) located in front of a user and a location of the LCD with reference to the first camera, and identify a set of lights from the first video stream using a machine learning based image processing methodology. In one example, the first video stream comprises environmental data. Upon identifying, the system may determine one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity, and compute a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights. In one example, the boundary may be directly proportionate to the intensity of the one or more lights. Further to computing, the system may generate an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary, thereby reducing glare.
BRIEF DESCRIPTION OF DRAWINGS
[007] The foregoing detailed description of embodiments is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present subject matter, an example of construction of the present subject matter is provided as figures; however, the present subject matter is not limited to the specific method and system disclosed in the document and the figures.
[008] The present subject matter is described in detail with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to various features of the present subject matter.
[009] Figure 1 illustrates an implementation of a system for glare reduction, in accordance with an embodiment of the present subject matter.
[0010] Figure 2 illustrates an embodiment of the system for glare reduction, in accordance with an embodiment of the present subject matter.
[0011] Figure 3 illustrates a method for glare reduction, in accordance with an embodiment of the present subject matter.
[0012] Figure 4 illustrates a block diagram of machine learning based image processing methodology for glare reduction, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0013] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any system and method for glare reduction similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, an exemplary system and method for glare reduction are now described.
[0014] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments for glare reduction. However, one of ordinary skill in the art will readily recognize that the present disclosure for glare reduction is not intended to be limited to the embodiments described, but is to be accorded the widest scope consistent with the principles and features described herein.
[0015] As described above, in the conventional technology, polarized glasses are available in the market which reduce the brightness of the headlight beam but also reduce the wearer's visibility of the darker portions of the roadway. Conventional yellow or amber night-driving lenses can be effective in foggy or hazy daylight conditions, but they are not effective against headlight glare and should not be worn at dusk or at night. Further, the conventional technology blocks all light sources without any conditions; for example, it blocks back lights, street lights, signal lights, low beams, and the like.
[0016] In the embodiment, a first video stream from a first camera, a dimension of a Liquid crystal display (LCD) located in front of a user and a location of the LCD with reference to the first camera may be obtained, and a set of lights from the first video stream may be identified using a machine learning based image processing methodology. In one example, the first video stream comprises environmental data. Upon identifying, one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity may be determined, and a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights may be computed. In one example, the boundary may be directly proportionate to the intensity of the one or more lights. Further to computing, an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary may be generated. Further, the LCD may decrease the transparency of the pixels based on the instruction, thereby reducing glare.
[0017] Exemplary embodiments discussed above may provide certain advantages. Further, in the subsequent description, embodiments of the present subject matter along with the advantages are explained in detail with reference to Figure 1 to Figure 4.
[0018] Referring now to Figure 1, an embodiment of a network implementation 100 of a system 102-1…102-N, hereinafter referred to as 102, for glare reduction is disclosed. Although the present subject matter is explained considering that the system 102 is implemented on a device 118 such as spectacles/glasses, an automobile or a helmet, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a server 110, a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a network server, and the like. In one implementation, the system 102 may be implemented in a cloud-based environment. It will be understood that multiple users may access the system 102 through one or more user devices or applications residing on the user devices 104-1…104-N. Examples of the user device may include, but are not limited to, a portable computer, a personal digital assistant, a handheld system, and a workstation. The system 102 may be communicatively coupled to a first camera 112 and a second camera 114.
[0019] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 may be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may be either a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), Secure File Transfer Protocol (SFTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another.
[0020] In one embodiment, a system 102 for glare reduction may be disclosed. In the embodiment, the system 102 may obtain a first video stream from a first camera 112, a dimension of a Liquid crystal display (LCD) 116 located in front of a user 108 and a location of the LCD 116 with reference to the first camera 112, and identify a set of lights from the first video stream using a machine learning based image processing methodology. In one example, the first video stream comprises environmental data. Upon identifying, the system 102 may determine one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity, and compute a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights. In one example, the boundary may be directly proportionate to the intensity of the one or more lights. Further to computing, the system 102 may generate an instruction for decreasing a transparency of one or more pixels in the LCD 116 based on the location and the boundary, thereby reducing glare.
[0021] Referring now to Figure 2, an embodiment of the system 102 for glare reduction is illustrated in accordance with the present subject matter. The system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any systems that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 may be configured to fetch and execute computer-readable instructions stored in the memory 206.
[0022] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with the user directly or through the user device 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing systems, such as web servers and external data servers (not shown). The I/O interface 204 may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of systems to one another or to another server.
[0023] The memory 206 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210.
[0024] The modules 208 may include routines, programs, objects, components, data structures, and the like, which perform particular tasks or functions or implement particular abstract data types. In one implementation, the modules 208 may include an obtaining module 212, a determining module 214, a computing module 216, a generation module 220, and other modules 224. The other modules 224 may include programs or coded instructions that supplement applications and functions of the system 102.
[0025] The data 210, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 208. The data 210 may also include system data 226 and other data 228. In one embodiment, the other data 228 may include data generated as a result of the execution of one or more modules in the other modules 224.
[0026] In one implementation, a user may access the system 102 via the I/O interface 204. The user may be registered using the I/O interface 204 in order to use the system 102. In one aspect, the user may access the I/O interface 204 of the system 102 for obtaining information, providing inputs, configuring or implementing the system 102.
[0027] In one embodiment, an obtaining module 212 may obtain a first video stream from a first camera 112, a dimension of a Liquid crystal display (LCD) 116 located in front of a user 108 and a location of the LCD 116 with reference to the first camera 112. In one example, the first video stream may comprise environmental data. Further, the obtaining module 212 may obtain a second video stream from a second camera 114. In one example, the second video stream may comprise user data. Further, the obtaining module 212 may store the first and the second video streams in the system data 226.
[0028] In one embodiment, upon obtaining, a determining module 214 may identify a set of lights from the first video stream using a machine learning based image processing methodology, and a gaze direction of the user based on an analysis of the second video stream using a machine learning based image processing methodology. Upon identifying, the determining module 214 may determine one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity. Further, the determining module 214 may store the one or more lights in the system data 226.
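By way of a non-limiting illustration only, the detection of lights above the predefined intensity in a camera frame may be sketched as follows. This is an editorial sketch assuming an OpenCV-based pipeline; the function name, the threshold value, and the blob-based approach are illustrative assumptions and do not form part of the disclosure.

```python
# Illustrative sketch only: bright-spot detection in a single frame (assumes OpenCV).
import cv2
import numpy as np

PREDEFINED_INTENSITY = 220  # assumed 8-bit threshold for "headlight-like" brightness

def detect_bright_spots(frame_bgr):
    """Return (x, y, mean_intensity) for every blob brighter than the predefined intensity."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (11, 11), 0)            # suppress sensor noise
    _, mask = cv2.threshold(blurred, PREDEFINED_INTENSITY, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        blob_mask = np.zeros_like(mask)
        cv2.drawContours(blob_mask, [c], -1, 255, -1)
        spots.append((cx, cy, float(cv2.mean(gray, mask=blob_mask)[0])))
    return spots
```

In practice, a trained model (as described later with reference to Figure 4) would further filter these candidate spots so that only headlight objects are retained.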
[0029] In one embodiment, a computing module 216 may compute a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights. In one example, the boundary may be directly proportionate to the intensity of the one or more lights. In another example, the boundary may be a circle. In one example, for computing the boundary, the computing module 216 may identify a center of each of the one or more lights, wherein the center is indicative of the highest light intensity point, and determine a region around the center with an intensity of light above the predefined threshold. Further, the computing module 216 may generate the boundary comprising the region. Further, the computing module 216 may also determine a gaze location on the LCD panel based on the gaze direction and the dimensions of the LCD. Furthermore, the computing module 216 may store the location, the boundary, and the gaze location in the system data 226.
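A minimal sketch of the boundary computation described above, assuming the circular-boundary example: a circle is grown around the brightest point until the ring of pixels at that radius falls below the predefined threshold. The function name, the ring-scanning strategy and the default values are illustrative assumptions, not a definitive implementation.

```python
# Illustrative sketch only: grow a circular boundary around a bright-spot center.
import numpy as np

def compute_boundary_radius(gray, cx, cy, predefined_threshold=200, max_radius=100):
    """Return the radius at which the ring intensity drops below the threshold."""
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(xx - cx, yy - cy)
    for r in range(1, max_radius):
        ring = np.abs(dist - r) < 1.0          # pixels roughly r away from the center
        if not ring.any():
            break
        if gray[ring].mean() < predefined_threshold:
            return r                           # boundary grows with the intensity of the light
    return max_radius
```

Because the radius keeps growing while the surrounding pixels stay bright, a more intense (or nearer) headlight naturally produces a larger boundary, consistent with the boundary being directly proportionate to the intensity.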
[0030] In one embodiment, a generation module 220 may generate an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary. Further, the transparency of the LCD is reduced based on the instruction, thereby reducing glare. In one example, the transparency may be decreased until either the intensity of the one or more lights associated with headlights goes below a predefined threshold, or the intensity of the one or more lights is zero. In another example, the generation module 220 may update the instruction based on the gaze location. In another example, the updating may comprise decreasing the transparency only in the region of the gaze location or vision of the user. Further, the generation module 220 may store the instruction in the system data 226.
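A minimal sketch of how such an instruction might be represented, assuming it takes the form of a per-pixel opacity mask sent to the LCD driver; the data structure and the 0.7 default opacity (taken from the 50%–70% blocking range mentioned in the example below) are illustrative assumptions.

```python
# Illustrative sketch only: build a per-pixel "decrease transparency" mask for the LCD.
import numpy as np

def generate_instruction(lcd_width, lcd_height, spots_on_lcd, opacity=0.7):
    """spots_on_lcd: list of (x, y, radius) already mapped to LCD pixel coordinates."""
    mask = np.zeros((lcd_height, lcd_width), dtype=np.float32)   # 0.0 = fully transparent
    yy, xx = np.ogrid[:lcd_height, :lcd_width]
    for x, y, r in spots_on_lcd:
        mask[np.hypot(xx - x, yy - y) <= r] = opacity            # darken only pixels inside the boundary
    return mask   # each value is the assumed fraction of light the LCD driver should block
```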
[0031] In the subsequent description, one more example of the present subject matter is disclosed.
[0032] In the example, the present subject matter consists of a goggle whose left and right glasses are made using a transparent pixel-based LCD display, or a thin-film LCD pasted on one side of both the left and right glasses, and this LCD is controlled by a microprocessor according to the defined algorithm. A camera is placed between the left and right glasses, and its video is captured and controlled by a high-speed microprocessor. In the pixel-based transparent LCD covering one side of the glasses, each pixel can be controlled by the processor. Depending on the electrical signal from the processor, an LCD pixel can block 50%–70% of the transparency of the goggles.
[0033] Further, in the example, while driving, the camera captures the user's view as frames. Each camera frame is processed using a machine learning algorithm. The machine learning algorithm is used to find the brightest spots, that is, the location coordinates of the oncoming vehicle's headlights and the diameter of the spots, with respect to the image frame size. The machine learning algorithm is trained with light spot images and vehicle headlight images as its knowledge, so the algorithm is able to find the locations of these trained objects in the captured video frames. The output of the machine learning algorithm is the coordinate location (x, y) of the headlight objects, and all other light objects are rejected. From the user's view, a nearby vehicle produces a bright spot of larger diameter from its headlight and a distant vehicle produces a bright spot of smaller diameter; this variation is captured by the image processing unit and sent as a parameter to the processing unit. Using the above parameters, the processing unit calibrates them with respect to the LCD display size and enables the LCD pixels that block the vision where the bright spot is located. The LCD blocking spot blocks only the brightest spot (the headlight spot) of the vision and does not disturb the low-light portions of the roadway. Since a nearby vehicle has a larger bright-spot diameter and a distant vehicle a smaller one, the image processing and processing units address this by varying the diameter of the blocking spot in the LCD display accordingly.
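A minimal sketch of the calibration of camera-frame coordinates to LCD pixel coordinates mentioned above, assuming a simple scale-and-offset relationship; the function name and the offset (obtained from the manual calibration described in the next paragraph) are illustrative assumptions.

```python
# Illustrative sketch only: map a bright-spot center from camera-frame pixels to LCD pixels.
def camera_to_lcd(x_cam, y_cam, frame_size, lcd_size, offset=(0, 0)):
    """frame_size and lcd_size are (width, height); offset comes from user calibration."""
    fw, fh = frame_size
    lw, lh = lcd_size
    x_lcd = x_cam * lw / fw + offset[0]     # scale to LCD width, then apply calibration offset
    y_lcd = y_cam * lh / fh + offset[1]
    return int(round(x_lcd)), int(round(y_lcd))
```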
[0034] Furthermore, in the example, the camera location and the eye location are in different places, so the LCD blocking spot and the actual headlight spot may not be in sync with respect to the user's view. This can be adjusted manually using calibration keys from the user's perspective. The above manual calibration can also be automated by placing eyeball tracking sensors; the eyeball can be tracked using different methods, such as cameras or any eye movement detection sensors.
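A minimal sketch of updating the blocking mask based on a tracked gaze location, assuming the eye-tracking sensor reports a gaze point already mapped to LCD coordinates; the function name and the gaze-region radius are illustrative assumptions.

```python
# Illustrative sketch only: keep the darkened pixels only around the user's gaze point.
import numpy as np

def restrict_to_gaze(mask, gaze_x, gaze_y, gaze_radius=150):
    """Outside the gaze region the LCD stays fully transparent."""
    h, w = mask.shape
    yy, xx = np.ogrid[:h, :w]
    gaze_region = np.hypot(xx - gaze_x, yy - gaze_y) <= gaze_radius
    return np.where(gaze_region, mask, 0.0)
```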
[0035] Further, the present system and method can be trained with different objects or light sources which are actually to be blocked. For example, the sun is another source of light which heavily influences driving during sunrise and sunset; to block it, the existing machine learning algorithm can also be trained with sun image data. The system will then block the sun too.
[0036] In one more example of the present subject matter, once the image is captured by the camera, an algorithm is used to identify only the headlight objects of vehicles and reject all other light objects. This can be achieved using machine learning algorithms such as object identification and classification algorithms. These algorithms are trained with most of the lights available in the environment (as shown in Figure 4) and classified with labels of "0" or "1", where "1" is assigned to headlight objects and "0" is assigned to non-headlight objects such as street lights, danger lights, signal lights, low beams, and the like. These training data are given as input to the machine learning model and the model is trained during the development stage. At run time, through image processing, the machine learning model receives the captured images, identifies the objects contained in the images and classifies them with respect to the trained data. The output of the machine learning model is the location coordinate parameters of the headlight objects. The coordinate parameters are then sent as input to the image processing module, which identifies the diameter of the brightest spot around the coordinate parameters of the headlight object. Then the coordinates and their diameters are passed as input parameters to the processing unit.
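A minimal sketch of the headlight/non-headlight classification step described above, assuming a small convolutional network trained on labelled light-source crops (label "1" for headlights, label "0" for other lights); the architecture, input size and framework choice (PyTorch) are illustrative assumptions and not part of the disclosure.

```python
# Illustrative sketch only: binary classifier for 64x64 light-source crops (assumes PyTorch).
import torch
import torch.nn as nn

class LightClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))  # 64x64 inputs

    def forward(self, x):
        # Output logits over {0: non-headlight, 1: headlight}
        return self.head(self.features(x))

# At run time, each detected light crop would be classified; only label-1 (headlight)
# coordinates are forwarded to the image processing module for diameter estimation.
```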
[0037] Exemplary embodiments for glare reduction discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages, without limitation, are the following.
[0038] Some embodiments of the system and the method block 100% of glare/halos, which makes the driver more comfortable while driving at night.
[0039] Some embodiments of the system and the method identify and block only the headlights, and not all the other light objects which the driver actually needs to observe while driving.
[0040] Some embodiments of the system and the method have the intelligence to block only the bright spots in the field of vision without disturbing the low-light portions of the roadway.
[0041] Some embodiments of the system and the method can be used for both day-time and night-time driving by changing modes.
[0042] Some embodiments of the system and the method are applicable not only to goggles but can also be used in the windshield glass of any vehicle such as a car, bus, truck, etc.
[0043] Some embodiments of the system and the method can also be used in a helmet.
[0044] Referring now to figure 3, a method 300 for glare reduction using a system 102, is disclosed in accordance with an embodiment of the present subject matter. The method 300 for glare reduction using a system 102 may be described in the general context of device executable instructions. Generally, device executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, and the like, that perform particular functions or implement particular abstract data types. The method 300 for glare reduction using a system 102 may also be practiced in a distributed computing environment where functions are performed by remote processing systems that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage systems.
[0045] The order in which the method 300 for glare reduction using a system 102 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 for glare reduction using a system 102 may be considered to be implemented in the above-described system 102.
[0046] At block 302, a first video stream from a first camera, a dimension of a Liquid crystal display (LCD) located in front of a user and a location of the LCD with reference to the first camera may be obtained. In one embodiment, the obtaining module 212 may obtain the first video stream. Further, the obtaining module 212 may store the first video stream in the system data 226.
[0047] At block 304, a set of lights from the first video stream may be identified using a machine learning based image processing methodology. In one embodiment, the determining module 214 may identify the set of lights from the first video stream. Further, the determining module 214 may store the set of lights in the system data 226.
[0048] At block 306, one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity may be determined. In one embodiment, the determining module 214 may determine the one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity and store the one or more lights in the system data 226.
[0049] At block 308, a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights may be computed. In one embodiment, the computing module 216 may compute the location of the one or more lights on the LCD and the boundary surrounding each of the one or more lights and store the location in the system data 226.
[0050] At block 310, an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary may be generated, thereby reducing glare. In one embodiment, the generation module 220 may generate the instruction for decreasing the transparency of one or more pixels in the LCD based on the location and the boundary, thereby reducing glare, and store the instruction in the system data 226.
[0051] Although implementations for methods and systems for glare reduction have been described in language specific to features, system and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods for glare reduction described. Rather, the specific features and methods are disclosed as examples of implementations for glare reduction.
Claims:
1. A method for glare reduction, the method comprising:
obtaining, by a processor, a first video stream from a first camera, a dimension of a Liquid crystal display (LCD) located in front of a user and a location of the LCD with reference to the first camera, and wherein the first video stream comprises environmental data;
identifying, by the processor, a set of lights from the first video stream using a machine learning based image processing methodology;
determining, by the processor, one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity;
computing, by the processor, a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights, wherein the boundary is directly proportionate to the intensity of the one or more lights; and
generating, by the processor, an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary, thereby reducing glare.
2. The method of claim 1, further comprising:
obtaining, by the processor, a second video stream from a second camera, wherein the second video stream comprises user data;
identifying, by the processor, a gaze direction of the user based on an analysis of the second video stream using a machine learning based image processing methodology;
determining, by the processor, a gaze location on the LCD panel based on the gaze direction and dimensions of the LCD; and
updating, by the processor, the instruction based on the gaze location.
3. The method of claim 1, further comprising:
identifying, by the processor, a center of each of the one or more lights, wherein the center is indicative of a highest light intensity point;
determining, by the processor, a region around the center with an intensity of light above the predefined threshold; and
generating, by the processor, the boundary comprising the region.
4. The method of claim 1, wherein the boundary is a circle.
5. The method of claim 1, wherein the transparency is decreased until one of: the intensity of the one or more lights is below a predefined threshold, or the intensity of the one or more lights is zero.
6. A system for glare reduction, the system comprising:
a memory; and
a processor coupled to the memory, wherein the processor is configured to execute program instructions stored in the memory for:
obtaining a first video stream from a first camera, a dimension of a Liquid crystal display (LCD) located in front of a user and a location of the LCD with reference to the first camera, and wherein the first video stream comprises environmental data;
identifying a set of lights from the first video stream using a machine learning based image processing methodology;
determining one or more lights from the set of lights associated with a headlight of a vehicle and above a predefined intensity;
computing a location of the one or more lights on the LCD and a boundary surrounding each of the one or more lights, wherein the boundary is directly proportionate to the intensity of the one or more lights; and
generating an instruction for decreasing a transparency of one or more pixels in the LCD based on the location and the boundary, thereby reducing glare.
7. The system of claim 6, wherein the transparency is decreased until one of: the intensity of the one or more lights is below a predefined threshold, or the intensity of the one or more lights is zero.
8. The system of claim 6, further comprising:
obtaining a second video stream from a second camera, wherein the second video stream comprises user data;
identifying a gaze direction of the user based on an analysis of the second video stream using a machine learning based image processing methodology;
determining a gaze location on the LCD panel based on the gaze direction and dimensions of the LCD; and
updating the instruction based on the gaze location.
9. The system of claim 6, further comprising:
identifying a center of each of the one or more lights, wherein the center is indicative of a highest light intensity point;
determining a region around the center with an intensity of light above the predefined threshold; and
generating the boundary comprising the region.
10. The system of claim 6, wherein the boundary is a circle.