
A Method And A System For Predicting An Optical Property Of A Lens Unit

Abstract: The present disclosure relates to a method (900) for predicting an optical property of a lens unit (104). The method (900) may include receiving a reference image (304) of an object (106) having a predefined profile with at least one predetermined optical parameter. Further, the method (900) may include receiving a subject image (302) of the object (106) captured through the lens unit (104). Furthermore, the method (900) may include detecting, from the received subject image (302), the object (106) and at least one optical parameter associated with the detected object (106) using an image processing technique. Furthermore, the method (900) may include processing the at least one predetermined optical parameter with the at least one detected optical parameter using a machine learning (ML) model to predict the optical property. The optical property may be indicative of a lens type and a refractive power of the lens unit (104).


Patent Information

Application #:
Filing Date: 08 May 2023
Publication Number: 19/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-12-20
Renewal Date:

Applicants

ALFALEUS TECHNOLOGY PRIVATE LIMITED
654, 2ND FLOOR, VIVEK VIHAR, NEW SANGANARE ROAD, JAIPUR, RAJASTHAN , INDIA

Inventors

1. Sandal Kotawala
654, 2ND FLOOR, VIVEK VIHAR, NEW SANGANARE ROAD, JAIPUR, RAJASTHAN – 302019, INDIA
2. Dr. P. Arulmozhivarman
School of Electrical Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India - 632014
3. Deepikaa Balaji
71/14A, Bakthapuri Street, Kumbakonam, Tamilnadu, India - 612001
4. Harika BV
27-2-286-18, Vepadoruvu S.V.G.S College, Nellore, Andhra Pradesh, India - 524002

Specification

DESC:FIELD OF THE INVENTION

The present disclosure relates to the field of optical devices. More particularly, the present disclosure relates to a method and a system for predicting an optical property of a lens unit.

BACKGROUND

Lensometers are employed by optical experts to determine the lens type and the spectacle power of a prescribed lens, in order to verify that the lens has been accurately ground to the correct prescription. Existing lensometers, such as manual lensometers and automated lensometers, are widely used by optical experts including optometrists, opticians, and ophthalmologists.

Existing lensometers are bulky and heavy, and require significant space for installation and operation. Their handling and transportation are difficult due to their weight and bulkiness. Further, the accessibility of existing lensometers is limited for small clinics or individual practitioners, as such lensometers are very expensive. Furthermore, existing lensometers require a certain level of training and expertise to operate, so only trained and expert professionals can use them. Thus, the usage of existing lensometers is limited to trained and expert professionals.

Moreover, the accuracy and precision of existing lensometers can be affected by human error or technical issues, which leads to incorrect prescriptions and reduced visual acuity for a patient. One approach adopted by existing automated lensometers is to use cutting-edge software and algorithms to increase accuracy and speed up the measuring procedure. Even so, such lensometers are not very accurate, and the measurement error may lead to incorrect prescriptions.

Therefore, in view of the above-mentioned problems, there is a need to provide a method and/or a system for predicting a refractive power of a lens, that can eliminate one or more above-mentioned problems associated with the existing art.

SUMMARY

This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the invention nor intended to determine the scope of the invention.

The present disclosure relates to a method for predicting an optical property of a lens unit. The method may include receiving a reference image of an object having a predefined profile with at least one predetermined optical parameter. Further, the method may include receiving a subject image of the object captured through the lens unit. Furthermore, the method may include detecting, from the received subject image, the object and at least one optical parameter associated with the detected object using an image processing technique. Furthermore, the method may include processing the at least one predetermined optical parameter with the at least one detected optical parameter using a machine learning (ML) model to predict the optical property. The optical property may be indicative of a lens type and a refractive power of the lens unit.
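The four steps of the method above can be sketched as a small pipeline. This is a minimal illustrative sketch only: the function names, the `OpticalProperty` container, and the callables passed in are assumptions made for clarity and are not part of the disclosure.

```python
# Hypothetical sketch of the four-step method: receive the reference image,
# receive the subject image, detect optical parameters, and run the ML model.
from dataclasses import dataclass


@dataclass
class OpticalProperty:
    lens_type: str            # "spherical" or "cylindrical"
    refractive_power: float   # dioptres; the sign indicates positive/negative


def predict_optical_property(reference_image, subject_image,
                             detect_parameters, ml_model):
    """detect_parameters maps an image to a dict of optical parameters;
    ml_model maps a feature vector to (lens_type, refractive_power)."""
    ref_params = detect_parameters(reference_image)    # predetermined parameters
    subj_params = detect_parameters(subject_image)     # detected parameters
    # Feed the predetermined and detected parameters to the model as
    # per-parameter differences (one plausible feature encoding).
    features = [subj_params[k] - ref_params[k] for k in sorted(ref_params)]
    lens_type, power = ml_model(features)
    return OpticalProperty(lens_type, power)
```

In this sketch the ML model is treated as an opaque callable, mirroring how the disclosure separates detection from prediction.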

Further, the present disclosure relates to a system for predicting an optical property of a lens unit. The system may include a first receiving module, a second receiving module, a detecting module, and a processing module. The first receiving module may be configured to receive a reference image of an object having a predefined profile with at least one predetermined optical parameter. The second receiving module may be configured to receive a subject image of the object captured through the lens unit. The detecting module may be configured to detect, from the received subject image, the object and at least one optical parameter associated with the detected object using an image processing technique. The processing module may be configured to process the at least one predetermined optical parameter with the at least one detected optical parameter using the machine learning (ML) model to predict the optical property. The optical property is indicative of a lens type and a refractive power of the lens unit.

The system and the method of the present disclosure offer an accurate prediction of the lens type and the refractive power of the lens unit, such that one or more appropriate lenses may be prescribed to a patient. Herein, the implementation of the machine learning model accurately predicts the lens type and the refractive power of the lens unit. Further, the usage of the system and the method of the present disclosure are not limited to trained and experienced professionals. Herein, an ordinary user may utilize the system and the method without any prior training, and this improves the user experience.

To further clarify the advantages and features of the present disclosure, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

Figure 1 illustrates a schematic view of a lensometer setup including a user equipment, a lens unit, an optotype and a system for predicting an optical property of the lens unit, according to an embodiment of the present disclosure;
Figure 2 illustrates a side view of the user equipment, the lens unit, the optotype and an adapter of the lensometer setup, according to an embodiment of the present disclosure;
Figure 3 illustrates a front view of the user equipment depicting an image of the optotype captured through the lens unit, according to an embodiment of the present disclosure;
Figure 4 illustrates different isometric views of the adapter, according to an embodiment of the present disclosure;
Figure 5 illustrates different types of optotypes, according to an embodiment of the present disclosure;
Figure 6 illustrates a block diagram of the system for predicting an optical property of a lens unit, according to an embodiment of the present disclosure;
Figure 7 illustrates variations in the first diameter and the second diameter of the optotype after placing the lens unit between the camera and the optotype, according to an embodiment of the present disclosure;
Figure 8 illustrates a front view of the reference image and the subject image of the optotype, according to an embodiment of the present disclosure;
Figure 9 illustrates a block diagram of the system for predicting an optical property of a lens unit, according to an embodiment of the present disclosure;
Figure 10 illustrates a flow chart of a process of using lensometer setup, according to an embodiment of the present disclosure;
Figure 11 illustrates a flow chart of a method for training the machine learning model, according to an embodiment of the present disclosure;
Figure 12 illustrates an interface to train the machine learning model, according to the present disclosure;
Figure 13 illustrates a correlation heat map used in the training of the machine learning model, according to the present disclosure;
Figure 14 illustrates a block diagram of a first model, a second model and a third model selectively used to predict the refractive power of the cylindrical lens of the lens unit, according to the present disclosure; and
Figure 15 illustrates a graphical representation depicting a correlation between the refractive power and magnification or minification, according to an embodiment of the present disclosure.

Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.

DETAILED DESCRIPTION OF FIGURES

For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended; such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the invention relates. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.

The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning one embodiment, or more than one embodiment, or all embodiments.

The terminology and structure employed herein is for describing, teaching, and illuminating some embodiments and their specific features and elements and does not limit, restrict, or reduce the scope of the claims or their equivalents.

More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”

Whether or not a certain feature or element was limited to being used only once, either way it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . .” or “one or more element is REQUIRED.”

Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having an ordinary skill in the art.

Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of illuminating one or more of the potential ways in which the specific features and/or elements of the attached claims fulfil the requirements of uniqueness, utility, and non-obviousness.

Use of the phrases and/or terms such as but not limited to “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment”, “furthermore embodiment”, “additional embodiment” or variants thereof do NOT necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or alternatively in the context of more than one embodiment, or further alternatively in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any feature and/or element described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.

Any particular and all details set forth herein are used in the context of some embodiments and therefore should NOT be necessarily taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below.

Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.

Figure 1 illustrates a schematic view of a lensometer setup 100 including a user equipment 102, a lens unit 104, an optotype 106 and a system 108 for predicting an optical property of the lens unit 104, according to an embodiment of the present disclosure. Referring to Figure 1, the lensometer setup 100 may be installed at eye clinics, eye hospitals, and/or spectacle stores. The lensometer setup 100 may be adapted to predict the optical property of the lens unit 104. Thus, the lensometer setup 100 may be used by optical experts such as optometrists, opticians, and ophthalmologists to predict the optical property of the lens unit 104, so that the lens unit 104 with the required optical property can be prescribed to a patient.

The lensometer setup 100 includes a machine learning (ML) model to predict the refractive power of the lens unit 104. Herein, the implementation of the machine learning model accurately and quickly predicts the refractive power of the lens unit 104 while keeping the lensometer setup 100 compact. Thus, the lensometer setup 100 may be installed and operated within a small space. Due to the compactness of the lensometer setup 100, its handling and transportation are easy.

Figure 2 illustrates a side view of the user equipment 102, the lens unit 104, the optotype 106 and an adapter 202 of the lensometer setup 100, according to an embodiment of the present disclosure. Figure 3 illustrates a front view of the user equipment 102 depicting an image of the optotype 106 captured through the lens unit 104, according to an embodiment of the present disclosure. Referring to Figures 2 and 3, the lensometer setup 100 may include, but is not limited to, a user equipment 102, a lens unit 104, an adapter 202 having an object 106, and a system 108 for predicting an optical property of the lens unit 104. In one example, the object 106 may be an optotype 106. In subsequent paragraphs, the object 106 may be interchangeably referred to as the optotype 106, without departing from the scope of the present disclosure.

The adapter 202 may include the optotype 106 and may be coupled with the user equipment 102. Further, the adapter 202 may be adapted to hold the lens unit 104. The lens unit 104 may be positioned between the optotype 106 and the user equipment 102, such that a subject image 302 of the optotype 106 may be captured through the lens unit 104. The lens unit 104 may include, but is not limited to, a spherical lens and a cylindrical lens. Herein, the spherical lens and the cylindrical lens may be positioned concentrically in the adapter 202. In an embodiment, the lens unit 104 may be at least one lens of spectacles.

The user equipment 102 may include an image-capturing device such as a camera 102-1 adapted to capture the subject image 302 of the optotype 106 through the lens unit 104. Herein, the camera 102-1 of the user equipment 102 may be coupled with the adapter 202 in a manner, such that the lens unit 104 may be positioned between the camera 102-1 and the optotype 106. Constructional and functional details of the adapter 202 are explained in the subsequent paragraphs with respect to Figure 4.

Further, the user equipment 102 may include a display screen 102-2 and a switch 102-3. The switch 102-3 may be adapted to be operated to capture the subject image 302 of the optotype 106 through the lens unit 104. The display screen 102-2 may be adapted to display the captured subject image 302 of the optotype 106. In an embodiment, the user equipment 102 may be a smartphone. In another embodiment, the user equipment 102 may be a computer or any other device having the camera 102-1, without departing from the scope of the present disclosure.

Figure 4 illustrates different isometric views of the adapter 202, according to an embodiment of the present disclosure. The adapter 202 may be coupled with the user equipment 102. In an embodiment, the adapter 202 may be clipped with the camera 102-1 of the user equipment 102. The adapter 202 may include a support member 402 and a holding member 404 orthogonally coupled with the support member 402. The holding member 404 may include a first end 406, a second end 408 opposite to the first end 406, and a body 410.

The first end 406 may be adapted to be clipped with the camera 102-1 of the user equipment 102. The second end 408 may be adapted to hold the optotype 106. Further, the body 410 extends between the first end 406 and the second end 408. The body 410 defines an opening 412 adapted to hold the lens unit 104. Herein, the opening 412 may be a lens unit holder 412. In an embodiment, the body 410 has a frustoconical shape, without departing from the scope of the present disclosure. In an embodiment, the adapter 202 may be formed of a polymeric material, without departing from the scope of the present disclosure.

Figure 5 illustrates different types of optotypes 106, according to an embodiment of the present disclosure. The optotype 106 may be attached to the second end 408 of the holding member 404 of the adapter 202. The optotype 106 may have a predefined profile. In an embodiment, as shown in (a) of Figure 5, the optotype 106 may have a circular profile. In another embodiment, as shown in (b) of Figure 5, the optotype 106 may have a hexagonal profile. In yet another embodiment, as shown in (c) of Figure 5, the optotype 106 may have a square profile. The optotype 106 may have concentric contours.

As shown in Figure 1, the system 108 may be in communication with the user equipment 102. In an embodiment, the system 108 may be a part of the user equipment 102, without departing from the scope of the present disclosure. The system 108 may be adapted to predict the optical property of the lens unit 104. Herein, the optical property may be indicative of a lens type and a refractive power of the lens unit 104. The lens type of the lens unit 104 may be the spherical lens or the cylindrical lens. Further, the refractive power of the lens unit 104 may be positive or negative. Details of the system 108 are explained in the subsequent paragraphs with respect to Figure 6.

Figure 6 illustrates a block diagram of the system 108 for predicting the optical property of the lens unit 104, according to an embodiment of the present disclosure. The system 108 may include different components that operate synergistically to predict the optical property of the lens unit 104. For instance, the system 108 may include a processor 602, a memory 604, module(s) 606, and data 608. The memory 604, in one example, may store the instructions to carry out the operations of the modules 606. The modules 606 and the memory 604 may be coupled to the processor 602.

The processor 602 can be a single processing unit or several units, all of which could include multiple computing units. The processor 602 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processor, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 602 is configured to fetch and execute computer-readable instructions and data stored in the memory 604.

The memory 604 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.

The modules 606, amongst other things, include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement data types. The modules 606 may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions.

Further, the modules 606 can be implemented in hardware, instructions executed by a processing unit, or a combination thereof. The processing unit can comprise a computer, a processor such as the processor 602, a state machine, a logic array, or any other suitable device capable of processing instructions. The processing unit can be a general-purpose processor which executes instructions to cause the general-purpose processor to perform the required tasks or, alternatively, the processing unit can be dedicated to performing the required functions. In another embodiment of the present disclosure, the modules 606 may be machine-readable instructions (software) which, when executed by the processor 602/processing unit, perform any of the described functionalities. Further, the data 608 serves, amongst other things, as a repository for storing data processed, received, and generated by one or more of the modules 606. The data 608 may include information and/or instructions to perform activities by the processor 602.

The module(s) 606 may perform different functionalities which may include, but may not be limited to, predicting the optical property of the lens unit 104. Accordingly, the module(s) 606 may include a first receiving module 610, a second receiving module 612, a detecting module 614, a processing module 616, a corroborating module 618, and a training module 620.

In one example, the first receiving module 610 is configured to receive various types of information from the user equipment 102 via a network. For instance, the network may be a wireless network, a wired network, or a combination thereof. The network can also be an individual network or a collection of many such individual networks, interconnected with each other and functioning as a single large network, e.g., the Internet or an intranet. The network can be one of the different types of networks, such as an intranet, a local area network (LAN), a wide area network (WAN), and the Internet. The network may either be a dedicated network, a virtual network, or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP) and Transmission Control Protocol/Internet Protocol (TCP/IP), to communicate with each other. An example of a network may include Fibre Channel Protocol (FCP) on Fibre Channel media. In an example, the network may include a Global System for Mobile Communication (GSM) network, a Universal Mobile Telecommunications System (UMTS) network, or any other communication network that uses any of the commonly used protocols.

In an embodiment, the first receiving module 610 may be configured to receive a reference image 304 of an object 106 having a predefined profile with at least one predetermined optical parameter. The first receiving module 610 may receive the reference image 304 of the object 106 from the user equipment 102. Herein, the reference image 304 may be captured by the camera of the user equipment 102 and stored in the user equipment 102. In an embodiment, the at least one predetermined optical parameter may include one or more of a predetermined blurriness, a predetermined skewness, a predetermined kurtosis, a first diameter d1 along a first axis X-X’, and a second diameter d2 along a second axis Y-Y’.
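The statistical parameters above can be computed directly from grayscale pixel intensities. The sketch below is an assumption-laden illustration: the disclosure does not specify how blurriness is measured, so the common Laplacian-variance measure is used here as a stand-in, and the function name is hypothetical.

```python
# Illustrative computation of blurriness, skewness and kurtosis from a
# grayscale image. The Laplacian-variance blurriness measure is an
# assumption, not something the disclosure specifies.
import numpy as np


def optical_parameters(gray: np.ndarray) -> dict:
    """Return blurriness, skewness and excess kurtosis of a grayscale image."""
    # Blurriness proxy: variance of a 4-neighbour Laplacian
    # (a low variance suggests a blurry image).
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    blurriness = float(lap.var())

    # Skewness and kurtosis of the intensity distribution.
    x = gray.astype(float).ravel()
    mu, sigma = x.mean(), x.std()
    skewness = float(((x - mu) ** 3).mean() / sigma ** 3)
    kurtosis = float(((x - mu) ** 4).mean() / sigma ** 4 - 3.0)
    return {"blurriness": blurriness, "skewness": skewness,
            "kurtosis": kurtosis}
```

The same routine could be applied to both the reference image 304 (yielding the predetermined parameters) and the subject image 302 (yielding the detected parameters).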

The second receiving module 612 may be configured to receive a subject image 302 of the object 106 captured through the lens unit 104. The subject image 302 may be captured by the camera 102-1 of the user equipment 102 when the lens unit 104 is placed between the object 106 and the camera 102-1. Further, the second receiving module 612 receives the subject image 302 from the user equipment 102.

The detecting module 614 may be configured to detect, from the received subject image 302, the object 106 and at least one optical parameter associated with the detected object 106 using an image processing technique. The processing module 616 may be configured to process the at least one predetermined optical parameter with the at least one detected optical parameter using a machine learning (ML) model to predict the optical property. In an embodiment, the machine learning model may be a K-Nearest Neighbours (KNN) model, without departing from the scope of the present disclosure.
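A K-Nearest Neighbours classifier can be sketched in a few lines. The toy training set below is synthetic and purely illustrative (the disclosure does not publish training data); features are the changes in the two diameters, (D1 − d1, D2 − d2), and the class labels are hypothetical.

```python
# Toy KNN sketch: the disclosure names KNN as one possible ML model.
import math


def knn_predict(train, query, k=1):
    """train: list of (feature_tuple, label) pairs. Returns the majority
    label among the k nearest neighbours by Euclidean distance."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)


# Synthetic prototypes: change in the two diameters (D1 - d1, D2 - d2).
training = [
    ((+2.0, +2.0), "spherical+"),    # uniform magnification
    ((-2.0, -2.0), "spherical-"),    # uniform minification
    ((+2.0,  0.0), "cylindrical+"),  # magnified along one axis only
    ((-2.0,  0.0), "cylindrical-"),  # minified along one axis only
]
```

With only one prototype per class, k=1 is used; a real model would be trained on many measured samples per power level.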

As mentioned before, the at least one detected optical parameter may include the first diameter D1 along the first axis X-X’ and the second diameter D2 along the second axis Y-Y’, and the object 106 has the circular profile. Herein, the first receiving module 610 may be configured to receive, from the user equipment 102, the reference image 304 of the object 106 with the predetermined first diameter d1 along the first axis X-X’ and the predetermined second diameter d2 along the second axis Y-Y’. In an embodiment, the first axis X-X’ may be a major axis, and the second axis Y-Y’ may be a minor axis.

Further, the second receiving module 612 may be configured to receive the subject image 302 of the object 106 captured through the lens unit 104. Moreover, the detecting module 614 may be configured to determine a contour of the object 106 from the received subject image 302, the first diameter D1 along the first axis X-X’, and the second diameter D2 along the second axis Y-Y’ using an image processing technique.

Figure 7 illustrates variations in the first diameter D1 and the second diameter D2 of the optotype 106 after placing the lens unit 104 between the camera 102-1 and the optotype 106, according to an embodiment of the present disclosure. Herein, the determined first and the second diameters D1, D2 of the optotype 106 captured in the subject image 302 may be changed from the predetermined first and the second diameters d1, d2 taken from the reference image 304.

As shown in (a) of Figure 7, the subject image of the optotype 106 may be magnified uniformly when the spherical lens of the positive refractive power is placed in front of the optotype 106. Herein, the determined first diameter D1 and the determined second diameter D2 may be increased. Further, as shown in (b) of Figure 7, the subject image of the optotype 106 may be minified uniformly when the spherical lens of the negative refractive power is placed in front of the optotype 106. Herein, the determined first diameter D1 and the determined second diameter D2 may be decreased.

As shown in (c) of Figure 7, the subject image of the optotype 106 may be magnified only along one axis, with no change along the perpendicular axis, when the cylindrical lens of the positive refractive power is placed in front of the optotype 106. Herein, one of the determined first diameter D1 and the determined second diameter D2 may be increased. Furthermore, as shown in (d) of Figure 7, the subject image of the optotype 106 may be minified only along one axis, with no change along the perpendicular axis, when the cylindrical lens of the negative refractive power is placed in front of the optotype 106. Herein, one of the determined first diameter D1 and the determined second diameter D2 may be decreased.

Moreover, as shown in (e) of Figure 7, the subject image of the optotype 106 may be magnified along one axis and minified along the perpendicular axis, when the spherical lens of the positive refractive power and the cylindrical lens of the negative refractive power are placed in front of the optotype 106.

Figure 8 illustrates a front view of the reference image 304 and the subject image 302 of the optotype 106, according to an embodiment of the present disclosure. Herein, the reference image 304 of the optotype 106 may be directly captured by the camera 102-1. In an embodiment, the reference image 304 may be a predefined image of the optotype 106 stored in the user equipment 102. Further, the subject image 302 of the optotype 106 may be captured by the camera 102-1 through the lens unit 104.

The detecting module 614 determines the contour of the object 106, the first diameter D1 along the first axis X-X’, and the second diameter D2 along the second axis Y-Y’, from the received subject image 302, using the image processing technique. Herein, the detecting module 614 may determine the coordinates of the pixels along the first axis X-X’ and the second axis Y-Y’ to determine the longest chord along each axis. The longest chord parallel to the first axis X-X’ may be defined as the first diameter D1, and the longest chord along the second axis Y-Y’ may be defined as the second diameter D2.
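The longest-chord search can be illustrated on a binary mask of the detected object. This is a sketch under assumptions: the disclosure leaves the image processing technique unspecified, so the object is assumed to be already segmented into a boolean mask, and the longest run of object pixels along each axis stands in for the longest chord.

```python
# Measure D1 (along X-X', the horizontal axis) and D2 (along Y-Y', the
# vertical axis) as the longest run of object pixels along each axis.
import numpy as np


def longest_run(line: np.ndarray) -> int:
    """Length of the longest consecutive run of True values in a 1-D line."""
    best = run = 0
    for v in line:
        run = run + 1 if v else 0
        best = max(best, run)
    return best


def diameters(mask: np.ndarray) -> tuple:
    """Return (D1, D2) in pixels from a 2-D boolean object mask."""
    d1 = max(longest_run(row) for row in mask)    # longest horizontal chord
    d2 = max(longest_run(col) for col in mask.T)  # longest vertical chord
    return d1, d2
```

In practice the contour would come from an edge or contour detector; the run-length scan above is only the final measurement step.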

The processing module 616 may be configured to process the predetermined first and second diameters d1, d2 with the determined first and second diameters D1, D2 using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit 104.

To process the predetermined first and second diameters d1, d2 with the determined first and second diameters D1, D2, the processing module 616 is configured to compare the determined first diameter D1 with the determined second diameter D2, using the machine learning model. Further, the processing module 616 determines that the lens type of the lens unit 104 is spherical when the determined first and second diameters D1, D2 are equal.

In an embodiment, the processing module 616 compares the determined first and second diameters D1, D2 with the predetermined first and second diameters d1, d2, using the machine learning model. Further, the processing module 616 determines that the refractive power of the spherical lens of the lens unit 104 is positive when the determined first and second diameters D1, D2 are greater than the predetermined first and second diameters d1, d2. On the other hand, the processing module 616 determines that the refractive power of the spherical lens is negative when the determined first and second diameters D1, D2 are less than the predetermined first and second diameters d1, d2.

To determine the lens type of the lens unit 104, the processing module 616 may be configured to compare the determined first diameter D1 with the determined second diameter D2, using the machine learning model. Further, the processing module 616 determines that the lens type of the lens unit 104 is cylindrical when the determined first and second diameters D1, D2 are unequal.
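The diameter comparisons of the preceding paragraphs may, in an illustrative example, be summarized as a plain rule-based sketch. The disclosure performs these comparisons inside the machine learning model; the `tol` pixel tolerance for treating diameters as equal is a hypothetical addition:

```python
def classify_lens(D1, D2, d1, d2, tol=2):
    """Rule-based sketch of the comparison logic.

    D1, D2: diameters determined from the subject image.
    d1, d2: predetermined diameters from the reference image.
    Returns (lens_type, sign_of_refractive_power).
    """
    if abs(D1 - D2) <= tol:
        # Both axes changed equally: spherical lens.
        lens_type = "spherical"
        sign = "positive" if (D1 > d1 and D2 > d2) else "negative"
    else:
        # Axes changed unequally: cylindrical lens; the axis that
        # grew (or shrank) indicates the sign, per Figure 7 (c)-(d).
        lens_type = "cylindrical"
        sign = "positive" if (D1 > d1 or D2 > d2) else "negative"
    return lens_type, sign
```

For example, equal enlarged diameters classify as a positive spherical lens, while one enlarged diameter classifies as a positive cylindrical lens.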

To predict the refractive power of the cylindrical lens, the processing module 616 may compare the determined first and second diameters D1, D2 with the predetermined first and second diameters d1, d2, using the machine learning model. Further, the processing module 616 determines the refractive power of the cylindrical lens of the lens unit 104 by using one of a first model M1, a second model M2, and a third model M3, based on the determined refractive power of the spherical lens.

In another embodiment, the at least one determined optical parameter may include the determined blurriness. Herein, the detecting module 614 may be configured to determine a degree of blurriness of the object 106 from the received subject image 302 using the image processing technique. Further, the processing module 616 may be configured to process the determined degree of blurriness using the machine learning model to predict the optical property.

In yet another embodiment, the at least one determined optical parameter may include the determined skewness. Herein, the detecting module 614 may be configured to determine a degree of skewness of the object 106 from the received subject image 302 using the image processing technique. Further, the processing module 616 may be configured to process the determined degree of skewness using the machine learning model to predict the optical property.

In one more embodiment, the at least one determined optical parameter may include the determined kurtosis. Herein, the detecting module 614 may be configured to determine a degree of kurtosis of the object 106 from the received subject image 302 using the image processing technique. Further, the processing module 616 may be configured to process the determined degree of kurtosis using the machine learning model to predict the optical property.
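The disclosure does not give explicit formulas for the degrees of blurriness, skewness, and kurtosis. As an illustrative assumption, common definitions may be used: the inverse variance of a discrete Laplacian for blurriness, and the standardized third and fourth central moments of the pixel intensities for skewness and (excess) kurtosis:

```python
import numpy as np

def image_statistics(img):
    """Sketch of the three optical parameters, using common
    definitions (an assumption; the disclosure leaves them open)."""
    img = np.asarray(img, dtype=float)
    # Discrete Laplacian via finite differences (periodic boundary
    # for brevity); a sharp image yields a high-variance Laplacian.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    blurriness = 1.0 / (lap.var() + 1e-9)   # higher value = blurrier
    x = img.ravel()
    mu, sigma = x.mean(), x.std()
    skewness = ((x - mu) ** 3).mean() / (sigma ** 3 + 1e-9)
    kurtosis = ((x - mu) ** 4).mean() / (sigma ** 4 + 1e-9) - 3.0
    return blurriness, skewness, kurtosis
```

A symmetric two-level image (e.g. a checkerboard) gives zero skewness and an excess kurtosis of -2, which is a quick sanity check for the moment computations.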

After the determined blurriness, the determined skewness, the determined kurtosis, and the determined first and second diameters D1, D2 are processed, the corroborating module 618 may be configured to corroborate the optical properties determined by processing the predetermined first and second diameters d1, d2 with the determined first and second diameters D1, D2, the determined blurriness, the determined skewness, or the determined kurtosis. Further, the detecting module 614 may be configured to determine the final refractive power of the lens unit 104 based on the corroboration.
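The corroboration rule itself is not specified in the disclosure. One hypothetical realization is a majority vote on the lens type across the per-parameter predictions, averaging the refractive powers of the agreeing predictions:

```python
from collections import Counter

def corroborate(predictions):
    """Hypothetical corroboration rule (an assumption, not from the
    disclosure).

    predictions: list of dicts like {"lens_type": ..., "power": ...},
    one per optical parameter (diameters, blurriness, skewness,
    kurtosis). Returns the majority lens type and the mean power
    among the predictions that agree with it.
    """
    winner = Counter(p["lens_type"] for p in predictions).most_common(1)[0][0]
    powers = [p["power"] for p in predictions if p["lens_type"] == winner]
    return {"lens_type": winner, "power": sum(powers) / len(powers)}
```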

Figure 9 illustrates a flow chart of a method for predicting the optical property of the lens unit 104, according to an embodiment of the present disclosure. The order in which the method 900 steps are described below is not intended to be construed as a limitation, and any number of the described method 900 steps can be combined in any appropriate order to execute the method 900 or an alternative method 900. Additionally, individual steps may be deleted from the method 900 without departing from the scope of the subject matter described herein.

In an embodiment, the method 900 may be performed partially or completely by the system 108 shown in Figure 6. Prior to the beginning of the method 900, the mapping module 212 may train the machine learning model. In order to train the machine learning model, the mapping module 212 may map the contexts with one or more components from a training set. As mentioned before, the components may include one or more images and one or more texts.

At step 902, the first receiving module 610 may receive the reference image 304 of the object 106 having the predefined profile with at least one predetermined optical parameter. In an embodiment, the first receiving module 610 may receive the reference image 304 of the object 106 from the user equipment 102. Herein, the reference image 304 may be captured by the camera 102-1 of the user equipment 102 and stored in the user equipment 102.

At step 904, the second receiving module 612 may receive the subject image 302 of the object 106 captured through the lens unit 104. The subject image 302 may be captured by the camera 102-1 through the lens unit 104 and sent to the second receiving module 612.

At step 906, the detecting module 614 may detect, from the received subject image 302, the object 106 and at least one optical parameter associated with the detected object 106, using the image processing technique. Herein, the at least one determined optical parameter may include one or more of determined blurriness, determined skewness, determined kurtosis, the first diameter D1 along the first axis X-X’, and the second diameter D2 along the second axis Y-Y’. Further, at step 908, the processing module 616 may process the at least one predetermined optical parameter with the at least one detected optical parameter using the machine learning model to predict the optical property.

In an embodiment, the at least one determined optical parameter may include the first diameter D1 along the first axis X-X’ and the second diameter D2 along the second axis Y-Y’, and the object 106 has the circular profile. Herein, the method 900 includes receiving, by the first receiving module 610, the reference image 304 of the object 106 from the user equipment 102. Further, the method 900 includes receiving, by the second receiving module 612, the subject image 302 of the object 106 captured through the lens unit 104.

Furthermore, the method 900 includes determining, by the detecting module 614, the contour of the object 106 from the received subject image 302, the first diameter D1 along the first axis X-X’, and the second diameter D2 along the second axis Y-Y’ using the image processing technique. Moreover, the method 900 includes processing, by the processing module 616, the predetermined first and second diameters d1, d2 with the determined first and second diameters D1, D2 using the machine learning model to predict the optical property.

To determine the lens type of the lens unit, the processing includes comparing the determined first diameter D1 with the determined second diameter D2, using the machine learning model. Further, the lens type of the lens unit may be determined as spherical when the determined first and second diameters D1, D2 are equal.

To determine the spherical refractive power, the processing includes comparing, using the machine learning model, the determined first and second diameters D1, D2 with the predetermined first and second diameters d1, d2. Further, the processing includes determining that the refractive power of the spherical lens of the lens unit is positive when the determined first and second diameters D1, D2 are greater than the predetermined first and second diameters d1, d2. On the other hand, the processing includes determining that the refractive power of the spherical lens is negative when the determined first and second diameters D1, D2 are less than the predetermined first and second diameters d1, d2.

To determine the lens type of the lens unit, the processing includes comparing, using the machine learning model, the determined first diameter D1 with the determined second diameter D2. Further, the lens type of the lens unit may be determined as cylindrical when the determined first and second diameters D1, D2 are unequal.

To determine the refractive power of the cylindrical lens, the processing includes comparing, using the machine learning model, the determined first and second diameters D1, D2 with the predetermined first and second diameters d1, d2. Further, the processing includes determining the refractive power of the cylindrical lens of the lens unit, by using one of the first model M1, the second model M2, and the third model M3, based on the determined refractive power of the spherical lens. Herein, one of the first model M1, the second model M2, and the third model M3 may be selectively used to predict the refractive power of the cylindrical lens, based on the determined refractive power of the spherical lens.

In an embodiment, the at least one determined optical parameter may include blurriness. Herein, the method 900 includes determining, by the detecting module 614, a degree of blurriness of the object 106 from the received subject image using the image processing technique. Further, the method 900 includes processing, by the processing module 616, the determined degree of blurriness using the machine learning model to predict the optical property.

In another embodiment, the at least one determined optical parameter may include skewness. Herein, the method 900 includes determining, by the detecting module 614, a degree of skewness of the object 106 from the received subject image using the image processing technique. Further, the method 900 includes processing, by the processing module 616, the determined degree of skewness using the machine learning model to predict the optical property.

In yet another embodiment, the at least one determined optical parameter may include kurtosis. Herein, the method 900 includes determining, by the detecting module 614, a degree of kurtosis of the object 106 from the received subject image using the image processing technique. Further, the method 900 includes processing, by the processing module 616, the determined degree of kurtosis using the machine learning model to predict the optical property.

After processing the determined first and second diameters D1, D2, the determined blurriness, the determined skewness, and the determined kurtosis by using the machine learning model, the method 900 includes corroborating, by the corroborating module 618, the optical properties determined by processing of the predetermined first and second diameters d1, d2 with the determined first and second diameters D1, D2, the determined blurriness, the determined skewness, or the determined kurtosis. Further, the method 900 includes determining, by the detecting module 614, the final refractive power of the lens unit based on the corroboration.

Figure 10 illustrates a flow chart of a method 1000 of using the lensometer setup 100, according to an embodiment of the present disclosure. The order in which the method 1000 steps are described below is not intended to be construed as a limitation, and any number of the described method 1000 steps can be combined in any appropriate order to execute the method 1000 or an alternative method 1000. Additionally, individual steps may be deleted from the method 1000 without departing from the scope of the subject matter described herein.

The method 1000 begins at step 1002, at which the adapter 202 may be attached with the user equipment 102. Herein, the first end 406 of the holding member 404 of the adapter 202, may be clipped with the camera 102-1 of the user equipment 102. Further, at step 1004, the camera 102-1 may capture the reference image 304 of the optotype 106 coupled to the second end 408 of the holding member 404.

At step 1006, the lens unit 104 may be placed in the opening 412 of the adapter 202. Herein, the lens unit 104 may be placed in between the camera 102-1 and the optotype 106. In an embodiment, the lens unit 104 may include at least one of the spherical lens and the cylindrical lens.

At step 1008, the camera 102-1 may capture the subject image 302 of the optotype 106 through the lens unit 104. Herein, the at least one optical parameter of the optotype 106 in the subject image 302 may be changed with respect to the at least one optical parameter in the reference image 304.

Further, at step 1010, the system 108 may validate the captured subject image 302 of the optotype 106. If the subject image 302 is captured properly, the process moves forward to step 1012. If the subject image 302 is not captured properly, the process moves back to step 1008, and the camera 102-1 recaptures the subject image 302 of the optotype 106.

At step 1012, the determined optical parameter of the reference image 304 and the subject image 302 may be processed by the machine learning model. Before processing, the machine learning model may be trained to predict the optical property of the lens unit 104. In the subsequent paragraphs, the details related to the training of the machine learning model are explained.

The machine learning model may be a trained model. Herein, the training module 620 may be configured to train the machine learning model. The training module 620 may be configured to receive a plurality of reference images of the object 106 annotated by a predefined refractive power of the spherical lens and the cylindrical lens. Further, the training module 620 may be configured to determine first and second diameters of the object 106 from the received plurality of reference images. Furthermore, the training module 620 may be configured to train the machine learning model based on the determined first and second diameters of the object 106.

Figure 11 illustrates a flow chart of a method 1100 for training the machine learning model, according to an embodiment of the present disclosure. The order in which the method 1100 steps are described below is not intended to be construed as a limitation, and any number of the described method 1100 steps can be combined in any appropriate order to execute the method 1100 or an alternative method 1100. Additionally, individual steps may be deleted from the method 1100 without departing from the scope of the subject matter described herein.

In an embodiment, the method 1100 may be performed by the training module 620 of the system 108 shown in Figure 6. The method 1100 begins at step 1102, at which the training module 620 may receive a plurality of reference images of the object 106 annotated by a predefined refractive power of the spherical lens and the cylindrical lens. Further, the method 1100 includes determining, at step 1104, by the training module 620, the first and second diameters of the object 106 from the received plurality of reference images. Moreover, the method 1100 includes training, at step 1106, by the training module 620, the machine learning model based on the determined first and second diameters D1, D2 of the object 106.
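As an illustrative example of steps 1102 to 1106, an ordinary least-squares regressor may stand in for the machine learning model. The disclosure does not name a model family, so the regressor, the function names, and the synthetic data are assumptions for illustration only:

```python
import numpy as np

def train_power_model(diameter_pairs, powers):
    """Fit refractive power as an affine function of the diameters
    (D1, D2) measured from the annotated reference images — a minimal
    stand-in for steps 1104-1106 (model family is an assumption)."""
    X = np.column_stack([np.ones(len(diameter_pairs)),
                         np.asarray(diameter_pairs, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(powers, dtype=float), rcond=None)
    return coef  # [bias, weight for D1, weight for D2]

def predict_power(coef, D1, D2):
    """Predict refractive power for new measured diameters."""
    return float(coef @ np.array([1.0, D1, D2]))
```

With synthetic annotations generated by a known affine rule, the fitted model reproduces that rule exactly, which verifies the regression plumbing.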

Figure 12 illustrates an interface used to train the machine learning model, according to the present disclosure. Herein, the at least one optical parameter may be fed as an input into the hidden layers of the machine learning model for analysis. The at least one optical parameter includes the predetermined blurriness 1202, the predetermined skewness 1204, the determined kurtosis 1206, the predetermined angle 1208, and the determined first and second diameters D1, D2. Further, the machine learning model may be trained to generate the refractive power 1210, 1212 of the spherical lens and/or the cylindrical lens and the angle 1214 of the cylindrical lens, as an output.

Figure 13 illustrates a correlation heat map 1300 used in the training of the machine learning model, according to the present disclosure. The correlation heat map 1300 represents the correlation between the refractive power of the spherical lens and the maximum diameter D1, and between the refractive power of the cylindrical lens and the minimum diameter D2.

Figure 14 illustrates a block diagram of the first model M1, the second model M2, and the third model M3 selectively used to predict the refractive power of the cylindrical lens of the lens unit 104, according to the present disclosure. Herein, one model from among the first model M1, the second model M2, and the third model M3 may be selected to predict the refractive power of the cylindrical lens, based on the determined refractive power of the spherical lens. In an example, the first model M1 may be selected when the determined refractive power of the spherical lens is greater than 0. Further, the second model M2 may be selected when the determined refractive power of the spherical lens is in a range from 0 to -3.00. Furthermore, the third model M3 may be selected when the determined refractive power of the spherical lens is less than -3.00.
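The selection among the three models reduces to a simple dispatch on the spherical power. In the sketch below, the handling of the boundary values 0 and -3.00 diopters is an assumption, since the ranges as stated meet at their endpoints:

```python
def select_cylindrical_model(spherical_power, m1, m2, m3):
    """Choose among the first, second, and third models (M1, M2, M3)
    based on the determined spherical refractive power, in diopters.
    Boundary assignment at 0 and -3.00 is an assumption."""
    if spherical_power > 0:
        return m1        # M1: spherical power greater than 0
    if spherical_power >= -3.00:
        return m2        # M2: spherical power in the range 0 to -3.00
    return m3            # M3: spherical power below -3.00
```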

Figure 15 illustrates a graphical representation depicting a correlation between the refractive power and magnification or minification, according to the present disclosure. The refractive power of the lens unit 104 has a magnification or minification effect on the optotype 106. The graphical representation shows how changes in the refractive power of the lens unit 104 correspond to changes in the magnification or minification of the optotype 106.
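The disclosure does not state the underlying formula for this correlation. One standard thin-lens relation consistent with the depicted behavior is the power-factor (spectacle) magnification, where F is the refractive power of the lens unit in diopters and t is the lens-to-camera separation in meters; both symbols are assumptions introduced here for illustration:

```latex
M = \frac{1}{1 - t\,F}, \qquad
M > 1 \ \text{for } F > 0 \ \text{(magnification)}, \qquad
M < 1 \ \text{for } F < 0 \ \text{(minification)}
```

Under this relation, a positive refractive power enlarges the optotype and a negative refractive power shrinks it, matching the behavior described with reference to Figure 7.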

In the present disclosure, the system 108 and the method 900 offer an accurate prediction of the lens type and the refractive power of the lens unit 104, such that an appropriate prescription may be provided to a patient. Herein, the implementation of the machine learning model accurately predicts the lens type and the refractive power of the lens unit. Further, the implementation of the machine learning model makes the system 108 convenient for users having average skills. Thus, the usage of the system and the method of the present disclosure is not limited to trained and experienced professionals. The user may utilize the system and the method without any prior training, and this improves the user experience. Moreover, the system 108 may be part of the user equipment 102, and the user equipment 102 may be coupled with the adapter 202 having the optotype 106. Thus, the lensometer setup 100 of the present disclosure is not heavy and bulky. Therefore, the handling and transportation of the lensometer setup 100 are easy.

While specific language has been used to describe the present subject matter, any limitations arising on account thereof are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.

CLAIMS:

1. A method (900) for predicting an optical property of a lens unit (104), the method (900) comprising:
receiving (902) a reference image (304) of an object (106) having a predefined profile with at least one predetermined optical parameter;
receiving (904) a subject image (302) of the object (106) captured through the lens unit (104);
detecting (906), from the received subject image (302), the object (106) and at least one optical parameter associated with the detected object (106) using an image processing technique; and
processing (908) the at least one predetermined optical parameter with the at least one detected optical parameter using a machine learning (ML) model to predict the optical property, wherein the optical property is indicative of a lens type and a refractive power of the lens unit (104).

2. The method (900) as claimed in claim 1, wherein the at least one determined optical parameter comprises one or more of determined blurriness, determined skewness, determined kurtosis, a first diameter (D1) along a first axis (X-X’), and a second diameter (D2) along a second axis (Y-Y’).

3. The method (900) as claimed in claim 2, wherein the method (900) comprises:
receiving the reference image (304) of the object (106) having the predefined profile being a circular profile and the at least one predetermined optical parameter being the predetermined first diameter (d1) along the first axis (X-X’) and the predetermined second diameter (d2) along the second axis (Y-Y’);
receiving the subject image (302) of the object (106) captured through the lens unit (104);
determining a contour of the object (106) from the received subject image (302), the first diameter (D1) along the first axis (X-X’), and the second diameter (D2) along the second axis (Y-Y’) using the image processing technique; and
processing the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2) using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

4. The method (900) as claimed in claim 3, wherein the processing (908) comprises:
comparing, using the machine learning model, the determined first diameter with the determined second diameter; and
determining that the lens type of the lens unit (104) is spherical when the determined first and second diameters (D1, D2) are equal.

5. The method (900) as claimed in claims 1 or 4, wherein the processing (908) comprises:
comparing, using the machine learning model, the determined first and second diameters (D1, D2) with the predetermined first and second diameters (d1, d2);
determining that the refractive power of the spherical lens of the lens unit (104) is positive when the determined first and second diameters (D1, D2) are greater than the predetermined first and second diameters (d1, d2); and/or
determining that the refractive power of the spherical lens is negative when the determined first and second diameters (D1, D2) are less than the predetermined first and second diameters (d1, d2).

6. The method (900) as claimed in claim 3, wherein the processing (908) comprises:
comparing, using the machine learning model, the determined first diameter with the determined second diameter; and
determining that the lens type of the lens unit (104) is cylindrical when the determined first and second diameters (D1, D2) are unequal.

7. The method (900) as claimed in claims 2 or 6, wherein the processing (908) comprises:
comparing, using the machine learning model, the determined first and second diameters (D1, D2) with the predetermined first and second diameters (d1, d2);
determining the refractive power of the cylindrical lens of the lens unit (104), by using one of a first model (M1), a second model (M2), and a third model (M3), based on the determined refractive power of the spherical lens.

8. The method (900) as claimed in claim 3, comprising:
receiving a plurality of reference images of the object (106) annotated by a predefined refractive power of the spherical lens and the cylindrical lens;
determining the first and second diameters of the object (106) from the received plurality of reference images; and
training the machine learning model based on the determined first and second diameters (D1, D2) of the object (106).

9. The method (900) as claimed in claim 2, comprising:
determining a degree of blurriness of the object (106) from the received subject image (302) using the image processing technique; and
processing the determined degree of blurriness using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

10. The method (900) as claimed in claim 2, comprising:
determining a degree of skewness of the object (106) from the received subject image (302) using the image processing technique; and
processing the determined degree of skewness using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

11. The method (900) as claimed in claim 2, comprising:
determining a degree of kurtosis of the object (106) from the received subject image (302) using the image processing technique; and
processing the determined degree of kurtosis using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

12. The method (900) as claimed in claims 3, 9, 10, or 11, comprising:
corroborating the optical properties determined by processing of the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2), determined blurriness, determined skewness, or determined kurtosis; and
determining the final refractive power of the lens unit (104) based on the corroboration.

13. A system (108) for predicting an optical property of a lens unit (104), the system (108) comprising:
a first receiving module (610) configured to receive a reference image (304) of an object (106) having a predefined profile with at least one predetermined optical parameter;
a second receiving module (612) configured to receive a subject image (302) of the object (106) captured through the lens unit (104);
a detecting module (614) configured to detect, from the received subject image (302), the object (106) and at least one optical parameter associated with the detected object (106) using an image processing technique; and
a processing module (616) configured to process the at least one predetermined optical parameter with the at least one detected optical parameter using a machine learning (ML) model to predict the optical property, wherein the optical property is indicative of a lens type and a refractive power of the lens unit (104).

14. The system (108) as claimed in claim 13, wherein the at least one determined optical parameter comprises one or more of determined blurriness, determined skewness, determined kurtosis, a first diameter along a first axis (X-X’), and a second diameter along a second axis (Y-Y’).

15. The system (108) as claimed in claim 14, the system (108) comprising:
the first receiving module (610) configured to receive the reference image (304) of the object (106) having the predefined profile being a circular profile and the at least one predetermined optical parameter being the predetermined first diameter (d1) along the first axis (X-X’) and the predetermined second diameter (d2) along the second axis (Y-Y’);
the second receiving module (612) configured to receive a subject image (302) of the object (106) captured through the lens unit (104);
the detecting module (614) configured to determine a contour of the object (106) from the received subject image (302), the first diameter along the first axis (X-X’), and the second diameter along the second axis (Y-Y’) using an image processing technique; and
the processing module (616) configured to process the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2) using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

16. The system (108) as claimed in claim 15, wherein to process the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2), the processing module (616) is configured to:
compare, using the machine learning model, the determined first diameter with the determined second diameter; and
determine that the lens type of the lens unit (104) is spherical when the determined first and second diameters (D1, D2) are equal.

17. The system (108) as claimed in claims 15 or 16, wherein to process the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2), the processing module (616) is configured to:
compare, using the machine learning model, the determined first and second diameters (D1, D2) with the predetermined first and second diameters (d1, d2);
determine that the refractive power of the spherical lens of the lens unit (104) is positive when the determined first and second diameters (D1, D2) are greater than the predetermined first and second diameters (d1, d2); and/or
determine that the refractive power of the spherical lens is negative when the determined first and second diameters (D1, D2) are less than the predetermined first and second diameters (d1, d2).

18. The system (108) as claimed in claim 17, wherein to process the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2), the processing module (616) is configured to:
compare, using the machine learning model, the determined first diameter with the determined second diameter; and
determine that the lens type of the lens unit (104) is cylindrical when the determined first and second diameters (D1, D2) are unequal.

19. The system (108) as claimed in claims 17 or 18, wherein to process the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2), the processing module (616) is configured to:
compare, using the machine learning model, the determined first and second diameters (D1, D2) with the predetermined first and second diameters (d1, d2); and
determine the refractive power of the cylindrical lens of the lens unit (104) by using one of a first model (M1), a second model (M2), and a third model (M3), based on the determined refractive power of the spherical lens.

20. The system (108) as claimed in claim 15, comprising a training module configured to:
receive a plurality of reference images of the object (106) annotated by a predefined refractive power of the spherical lens and the cylindrical lens;
determine first and second diameters of the object (106) from the received plurality of reference images; and
train the machine learning model based on the determined first and second diameters (D1, D2) of the object (106).

21. The system (108) as claimed in claim 14, wherein:
the detecting module (614) is configured to determine a degree of blurriness of the object (106) from the received subject image (302) using the image processing technique; and
the processing module (616) is configured to process the determined degree of blurriness using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

22. The system (108) as claimed in claim 14, wherein:
the detecting module (614) is configured to determine a degree of skewness of the object (106) from the received subject image (302) using the image processing technique; and
the processing module (616) is configured to process the determined degree of skewness using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).

23. The system (108) as claimed in claim 14, wherein:
the detecting module (614) is configured to determine a degree of kurtosis of the object (106) from the received subject image (302) using the image processing technique; and
the processing module (616) is configured to process the determined degree of kurtosis using the machine learning model to predict the optical property, wherein the optical property is indicative of the lens type and the refractive power of the lens unit (104).
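Claims 21 through 23 each extract a single statistic of the imaged object (blurriness, skewness, kurtosis) for the machine learning model. The sketch below computes the three statistics over a flat list of pixel intensities; a real implementation would operate on a 2-D image, and the variance-of-Laplacian blurriness proxy is a common choice assumed here, not one stated in the claims.

```python
# Hypothetical feature extraction for claims 21-23, on 1-D intensities.

def moments(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var

def skewness(pixels):
    mean, var = moments(pixels)
    return sum((p - mean) ** 3 for p in pixels) / (len(pixels) * var ** 1.5)

def kurtosis(pixels):
    mean, var = moments(pixels)
    return sum((p - mean) ** 4 for p in pixels) / (len(pixels) * var ** 2)

def blurriness(pixels):
    # 1-D Laplacian (second difference); lower variance => blurrier edges.
    lap = [pixels[i - 1] - 2 * pixels[i] + pixels[i + 1]
           for i in range(1, len(pixels) - 1)]
    return moments(lap)[1]

sharp = [0, 0, 0, 255, 255, 255]        # hard edge
blurred = [0, 51, 102, 153, 204, 255]   # gradual ramp
assert blurriness(sharp) > blurriness(blurred)
```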

24. The system (108) as claimed in claim 15, 21, 22, or 23, comprising:
a corroborating module configured to corroborate the optical properties determined by processing the predetermined first and second diameters (d1, d2) with the determined first and second diameters (D1, D2), the determined blurriness, the determined skewness, or the determined kurtosis; and
the detecting module (614) configured to determine the final refractive power of the lens unit (104) based on the corroboration.
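Claim 24's corroborating module reconciles the refractive powers predicted independently from the diameters, blurriness, skewness, and kurtosis. The claim does not specify the combination rule; the median used below is one plausible, outlier-robust choice, assumed purely for illustration.

```python
# Hypothetical corroboration step for claim 24.
import statistics

def corroborate(predictions: dict) -> float:
    """predictions: feature name -> predicted refractive power (dioptres).
    Returns a single corroborated power (median of the estimates)."""
    return statistics.median(predictions.values())

final = corroborate({
    "diameters": -1.75, "blurriness": -1.50,
    "skewness": -2.00, "kurtosis": -1.75,
})
print(final)  # -1.75
```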

Documents

Application Documents

# Name Date
1 202311032378-STATEMENT OF UNDERTAKING (FORM 3) [08-05-2023(online)].pdf 2023-05-08
2 202311032378-PROVISIONAL SPECIFICATION [08-05-2023(online)].pdf 2023-05-08
3 202311032378-PROOF OF RIGHT [08-05-2023(online)].pdf 2023-05-08
4 202311032378-POWER OF AUTHORITY [08-05-2023(online)].pdf 2023-05-08
5 202311032378-FORM FOR STARTUP [08-05-2023(online)].pdf 2023-05-08
6 202311032378-FORM FOR SMALL ENTITY(FORM-28) [08-05-2023(online)].pdf 2023-05-08
7 202311032378-FORM 1 [08-05-2023(online)].pdf 2023-05-08
8 202311032378-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [08-05-2023(online)].pdf 2023-05-08
9 202311032378-EVIDENCE FOR REGISTRATION UNDER SSI [08-05-2023(online)].pdf 2023-05-08
10 202311032378-DRAWINGS [08-05-2023(online)].pdf 2023-05-08
11 202311032378-Request Letter-Correspondence [15-09-2023(online)].pdf 2023-09-15
12 202311032378-Power of Attorney [15-09-2023(online)].pdf 2023-09-15
13 202311032378-FORM28 [15-09-2023(online)].pdf 2023-09-15
14 202311032378-Form 1 (Submitted on date of filing) [15-09-2023(online)].pdf 2023-09-15
15 202311032378-Covering Letter [15-09-2023(online)].pdf 2023-09-15
16 202311032378-POA [23-02-2024(online)].pdf 2024-02-23
17 202311032378-OTHERS [23-02-2024(online)].pdf 2024-02-23
18 202311032378-FORM FOR STARTUP [23-02-2024(online)].pdf 2024-02-23
19 202311032378-FORM 13 [23-02-2024(online)].pdf 2024-02-23
20 202311032378-EVIDENCE FOR REGISTRATION UNDER SSI [23-02-2024(online)].pdf 2024-02-23
21 202311032378-ENDORSEMENT BY INVENTORS [23-02-2024(online)].pdf 2024-02-23
22 202311032378-DRAWING [23-02-2024(online)].pdf 2024-02-23
23 202311032378-CORRESPONDENCE-OTHERS [23-02-2024(online)].pdf 2024-02-23
24 202311032378-COMPLETE SPECIFICATION [23-02-2024(online)].pdf 2024-02-23
25 202311032378-AMENDED DOCUMENTS [23-02-2024(online)].pdf 2024-02-23
26 202311032378-FORM-9 [25-04-2024(online)].pdf 2024-04-25
27 202311032378-FORM FOR STARTUP [25-04-2024(online)].pdf 2024-04-25
28 202311032378-EVIDENCE FOR REGISTRATION UNDER SSI [25-04-2024(online)].pdf 2024-04-25
29 202311032378-STARTUP [27-05-2024(online)].pdf 2024-05-27
30 202311032378-FORM28 [27-05-2024(online)].pdf 2024-05-27
31 202311032378-FORM FOR STARTUP [27-05-2024(online)].pdf 2024-05-27
32 202311032378-FORM 18A [27-05-2024(online)].pdf 2024-05-27
33 202311032378-EVIDENCE FOR REGISTRATION UNDER SSI [27-05-2024(online)].pdf 2024-05-27
34 202311032378-FER.pdf 2024-06-24
35 202311032378-OTHERS [06-08-2024(online)].pdf 2024-08-06
36 202311032378-FORM-8 [06-08-2024(online)].pdf 2024-08-06
37 202311032378-FER_SER_REPLY [06-08-2024(online)].pdf 2024-08-06
38 202311032378-CLAIMS [06-08-2024(online)].pdf 2024-08-06
39 202311032378-US(14)-HearingNotice-(HearingDate-18-11-2024).pdf 2024-10-11
40 202311032378-FORM-26 [15-11-2024(online)].pdf 2024-11-15
41 202311032378-Correspondence to notify the Controller [15-11-2024(online)].pdf 2024-11-15
42 202311032378-Written submissions and relevant documents [30-11-2024(online)].pdf 2024-11-30
43 202311032378-PatentCertificate20-12-2024.pdf 2024-12-20
44 202311032378-IntimationOfGrant20-12-2024.pdf 2024-12-20
45 202311032378-Response to office action [06-06-2025(online)].pdf 2025-06-06
46 202311032378-Response to office action [12-08-2025(online)].pdf 2025-08-12

Search Strategy

1 search_strategy_E_31-05-2024.pdf

ERegister / Renewals

3rd: 16 Apr 2025

From 08/05/2025 - To 08/05/2026