Abstract: A system and method for virtual superimposition of a spectacle on a user’s face is disclosed. The system includes a computing entity (108). The computing entity (108) includes one or more modules as follows. An end user image receiving module (202) receives and uncompresses an image of the end user (102). A face and eyes detecting module (204) detects the face and the eyes of the end user (102). An Inter Pupillary Distance calculating module (206) calculates an inter pupillary distance and a point of superimposition. A transparency image accessing module (210) accesses a transparency image of the spectacle stored in a database. A reflection image accessing module (212) accesses a reflection image of the spectacle. A side light entry image fetching module (214) fetches a side light entry image. A superimposition parameters and spectacle images transmitting module (216) transmits parameters for superimposition and one or more spectacle images.
SYSTEM AND METHOD FOR VIRTUAL SUPERIMPOSITION OF A SPECTACLE ON THE USER’S FACE
BACKGROUND
Technical Field
[0001] The embodiments herein generally relate to the field of image processing & computer vision and more particularly, to a system and a method for virtual superimposition of a spectacle on the user’s face and generating a superimposed image of a user wearing spectacles.
Description of the Related Art
[0002] In recent years, people increasingly prefer buying personal items such as clothes, shoes, bags, and even spectacles online, as it is convenient and cost effective. A key aspect that motivates a user to buy a specific spectacle frame is how the user looks while wearing it. There are websites today that allow the user to see how he or she looks with a spectacle frame, and there are kiosks and point-of-sale stations that offer a similar experience. However, the main limitation of such solutions is that the generated image of the user with the spectacle frame superimposed on his or her face is not realistic enough. Accordingly, there remains a need for a system and a method that enable the user to accurately visualize his or her appearance without physically trying on various spectacles, thereby informing the decision to buy or use them.
SUMMARY
[0003] In view of the foregoing, an embodiment herein provides a system for virtual superimposition of a spectacle on a face of an end user. The system includes (a) a memory unit that stores a database and a set of modules, and (b) a processor that executes the set of modules. The set of modules includes a computing entity. The computing entity includes an end user image receiving module, a face and eyes detecting module, an Inter Pupillary Distance calculating module, a sizing module, a transparency image accessing module, a reflection image accessing module, a side light entry image fetching module, and a superimposition parameters and spectacle images transmitting module. The end user image receiving module (i) receives and (ii) uncompresses an image of the end user received from an end user communicating device. The face and eyes detecting module detects the face and the eyes of the end user in the image of the end user. The Inter Pupillary Distance calculating module calculates an inter pupillary distance and a point of superimposition of the spectacle on the image of the end user. The sizing module calculates a resizing ratio for the spectacle on the image using pixel to real world conversion factors. The transparency image accessing module accesses a transparency image of the spectacle that is stored in the database. The transparency image accessing module accesses the transparency images in red (TR), green (TG), and blue (TB) channels of the spectacle that are stored in the database. The transparency image of the spectacle determines the amount of light that is allowed to pass through the spectacle. The reflection image accessing module accesses a reflection image of the spectacle that is stored in the database. The reflection image accessing module accesses the reflection images in red (RR), green (RG), and blue (RB) channels of the spectacle that are stored in the database. The reflection image of the spectacle determines the amount of light that is reflected off the surface of the spectacle. The side light entry image fetching module fetches a side light entry image that is stored in the database. The side light entry image fetching module accesses the side light entry images in red (SR), green (SG), and blue (SB) channels of the spectacle that are stored in the database. The side light entry image accounts for the light entering from the sides of the spectacle. The superimposition parameters and spectacle images transmitting module transmits parameters for superimposition and one or more spectacle images to the end user communicating device.
[0004] In an embodiment, the end user communicating device includes an end user interface device, an image superimposition module, and a spectacle image and data receiving module. The end user interface device captures the image of the end user. The image superimposition module performs a resizing operation of the spectacle based on the superimposition parameters received in the spectacle image and data receiving module. The sizing operation includes superimposed markers with two horizontal or two vertical (artificial) lines visible to the end user. The end user zooms the image of the end user in or out until a standard card, held at the same distance as the face of the end user from the end user interface module, perfectly aligns with the two horizontal or the two vertical (artificial) lines. The calculation of the pixel to real world factor of the end user is performed in the sizing module, wherein an image of the end user with a standard card held at the same distance as the face of the end user is processed to automatically detect the edges and corners of the standard card. The sizing module correlates the real-world size of a standard sized card with its length in pixels in the captured image of the end user, giving a correlation factor which is further required to resize the spectacle accurately.
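The correlation described above can be sketched as follows. This is an illustrative sketch, not part of the specification: the card width is the standard ISO/IEC 7810 ID-1 dimension used by payment and identification cards, and the function name and example values are assumptions.

```python
# Illustrative sketch: deriving the pixel-to-real-world conversion factor
# from a standard card detected in the end user image.
CARD_WIDTH_MM = 85.60  # ISO/IEC 7810 ID-1 card width (payment/ID cards)

def pixel_to_mm_factor(card_width_px: float) -> float:
    """Millimetres represented by one pixel, given the card's width in
    pixels as measured between its detected left and right edges."""
    if card_width_px <= 0:
        raise ValueError("card width in pixels must be positive")
    return CARD_WIDTH_MM / card_width_px

# Example: the detected card spans 428 pixels in the captured image.
factor = pixel_to_mm_factor(428.0)  # 0.2 mm per pixel
```

Any real-world length on the face plane (such as the spectacle frame width) can then be converted to pixels by dividing by this factor.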
[0005] In another embodiment, the spectacle image and data receiving module receives superimposition parameters and one or more spectacle images from the computing entity.
[0006] In yet another embodiment, the image superimposition module includes a spectacle transmissivity obtaining module and a final superimposed image obtaining module. The spectacle transmissivity obtaining module multiplies the transparency image of the spectacle with the side light entry image to obtain a final transparency image, and then multiplies the final transparency image with the end user image to account for the transmissivity of the spectacle. The final superimposed image obtaining module adds the reflection image of the spectacle to the transparency processed user image to obtain a final superimposed image. The end user interface module displays the final superimposed image of the end user.
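The per-channel arithmetic described above can be sketched for a single pixel as follows. Channel values are normalised to [0, 1]; the names and the clipping of the result are illustrative assumptions, not taken from the specification.

```python
def superimpose_pixel(user, transparency, side_light, reflection):
    """Compose one output channel value for one pixel.

    user         -- intensity of the end user image at this pixel
    transparency -- spectacle transparency (fraction of light passed)
    side_light   -- side light entry weight at this pixel
    reflection   -- light reflected off the spectacle surface (additive)
    """
    final_transparency = transparency * side_light   # combined transmissivity
    transmitted = user * final_transparency          # light passing through the lens
    return min(1.0, transmitted + reflection)        # add reflection, clip to range

# A mid-grey skin pixel behind a fairly dark, slightly reflective lens:
out = superimpose_pixel(user=0.5, transparency=0.4, side_light=0.9, reflection=0.1)
```

The multiplication step is what lets the end user's skin tone interact with the lens colour, while the additive reflection term is independent of what lies behind the lens.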
[0007] In yet another embodiment, the system includes a network. The images generated by the end user communicating device are transmitted to the computing entity, and the images and the data generated by the computing entity are transmitted to the end user communicating device.
[0008] In one aspect, a method of virtual superimposition of a spectacle on a face of an end user is provided. The method includes the following steps: (a) capturing an image of an end user; (b) compressing the image of the end user; (c) receiving, using a computing entity, the compressed image of the end user; (d) uncompressing the compressed image of the end user using the computing entity; (e) detecting the face and the eyes of the end user in the uncompressed image of the end user; (f) calculating the Inter-Pupillary Distance and a point of superimposition of the spectacle on the uncompressed image of the end user; (g) calculating a resizing ratio of a spectacle image using the pixel to real world conversion factors; (h) sending the final computed parameters to an end user communicating device for superimposition, wherein the final computed parameters include a point of superimposition on the face and an angle of inclination of the spectacle; (i) multiplying a transparency image of the spectacle with an input image of the end user to account for the transmissivity of the spectacle; (j) adding a reflection image of the spectacle to the transparency processed user image to obtain a final superimposed image; and (k) displaying the final superimposed image on the end user communicating device.
[0009] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The embodiments herein will be better understood from the following detailed descriptions with reference to the drawings, wherein:
[0011] FIG. 1 illustrates a system view of an end user capturing his or her image on an end user communicating device and transmitting one or more user images to a computing entity through a network or in the end user’s device according to an embodiment herein;
[0012] FIG. 2 is an exploded view of a computing entity of FIG. 1 according to an embodiment herein;
[0013] FIG. 3 is a flow diagram illustrating a process of an image processing & computer vision modules in the computing entity of FIG. 1 according to an embodiment herein;
[0014] FIGS. 4A and 4B are flow diagrams illustrating a process of capturing one or more end user images, computing superimposition parameters using computer vision algorithms, and superimposing the spectacle on the face of the end user 102 of FIG. 1 according to an embodiment herein;
[0015] FIG. 5 is a flow diagram illustrating a process of generating a dynamic shadow based on the illumination on a face of the end user 102 according to an embodiment herein;
[0016] FIG. 6 illustrates an exploded view of a typical receiver according to the embodiments herein; and
[0017] FIG. 7 is a schematic diagram of the end user communicating device/the computing entity used in accordance with the embodiments herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0018] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as not to unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0019] As mentioned, there remains a need for a system to accurately visualize the appearance of an end user when wearing spectacles. The embodiments herein achieve this by providing a technology that renders a good color reproduction of the transparency and reflection of the spectacle under the present illumination conditions, taking into account the interaction of the end user's facial skin tone with the spectacle color. The model parameters of each of the spectacles (prescription glasses and sunglasses) are generated and uploaded online into a database. The computing entity receives one or more images of an end user, completes the superimposition parameter calculation and sends the computed parameters to an end user communicating device in the embodiment herein. In one embodiment, the computing entity completes all processing (which includes rendering the effect of transmissivity and reflectivity of the glass and executing the superimposition algorithm) and then sends a final compressed image of the end user to an end user communicating device. The end user communicating device displays the final compressed superimposed image to the end user.
[0020] Various embodiments disclosed herein provide a system and a method for virtual superimposition of a spectacle on the user’s face and generating a superimposed image of a user wearing spectacles. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0021] FIG. 1 illustrates a system view of an end user 102 capturing his or her image on an end user communicating device 104 and transmitting one or more end user images to a computing entity 108 through a network 106 or in the end user’s device according to an embodiment herein. The system includes the end user 102, the end user communicating device 104, the network 106, the computing entity 108 and a database 110.
In one embodiment, the model parameters of each of the spectacles are generated and uploaded online into a database 110. The end user communicating device 104 includes an end user interface module 112, a spectacle image and data receiving module 114, and an image superimposition module 120. The end user interface module 112 captures an image of the end user 102 and performs a sizing operation on the image of the end user 102. The sizing operation involves inclusion of a standard sized object in the image of the end user 102. Because the dimensions of the standard sized object are uniform all across the world and known, the object provides a factor which relates real-world measurements to addressable image measurements. In one embodiment, a standard sized object such as a plastic payment card, loyalty card or an identification card is included in the image of the end user 102, and the dimension of either of its sides (length or breadth) is determined or input in image units, which, when compared against the standard sized object’s real-world size, gives a factor to relate the image dimensions with the real-world measurement system. In the sizing operation, two horizontal and two vertical (artificial) lines are visible to the end user 102. The end user 102 zooms the image of the end user 102 in or out until the standard card, held at the same distance as a face of the end user 102 from the end user interface module 112 (e.g., a camera), perfectly aligns with the two horizontal and the two vertical (artificial) lines. By doing this, the end user 102 fixes and correlates the real-world size of the standard card with its length in pixels in a captured image of the end user 102, hence giving a correlation factor which is further required to resize the spectacle accurately.
In an embodiment, the zooming of the image of the end user 102 and the placement of the horizontal and vertical (artificial) lines are automated in the software, hence correlating the real-world size of the card with its length in pixels in the captured image. In another embodiment, the horizontal and vertical (artificial) lines are replaced by suitable markers to identify the periphery of the standard card, after which further adjustment can be done manually, hence correlating the real-world size of the card with its length in pixels in the captured image of the end user 102. Hence, an accurate pixel to real-world conversion factor is calculated and transferred to the computing entity 108 using the end user interface module 112. In one embodiment, the computing entity 108 can be a part of the end user’s device. In another embodiment, the computing entity can be a computing entity 108 which is connected to the end user through the network 106. The spectacle image and data receiving module 114 receives superimposition parameters and one or more spectacle images from the computing entity 108. The image superimposition module 120 performs a resizing operation of the spectacle based on the superimposition parameters received in the spectacle image and data receiving module 114.
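A minimal sketch of the resizing operation described above, assuming a hypothetical real-world frame width and a conversion factor obtained from the sizing operation. The function name and the example values are illustrative assumptions, not taken from the specification.

```python
def spectacle_resize_ratio(frame_width_mm: float,
                           mm_per_pixel: float,
                           spectacle_image_width_px: float) -> float:
    """Ratio by which the stored spectacle image must be scaled so that
    the frame appears at its real-world size on the end user image.

    frame_width_mm           -- real-world width of the spectacle frame
    mm_per_pixel             -- conversion factor from the sizing operation
    spectacle_image_width_px -- width of the stored spectacle asset in pixels
    """
    target_width_px = frame_width_mm / mm_per_pixel  # frame width on the user image
    return target_width_px / spectacle_image_width_px

# A 140 mm frame, a 0.2 mm/pixel conversion factor, a 1000-pixel-wide asset:
ratio = spectacle_resize_ratio(140.0, 0.2, 1000.0)  # scale the asset by 0.7
```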
[0022] The image superimposition module 120 includes a spectacle transmissivity obtaining module and a final superimposed image obtaining module. The spectacle transmissivity obtaining module multiplies the transparency image of the spectacle with the side light entry image to obtain the final transparency image, and then multiplies the final transparency image with the end user image to account for the transmissivity of the spectacle. The final superimposed image obtaining module adds the reflection image of the spectacle to the transparency processed user image to obtain a final superimposed image. The end user interface module 112 displays the final superimposed image of the end user 102. The images generated by the end user communicating device 104 are transmitted to the computing entity 108, and the images and the data generated by the computing entity 108 are transmitted to the end user communicating device 104.
[0023] The computing entity 108 includes an image processing & computer vision module 116. In one embodiment, the image processing & computer vision module 116 and the superimposition module 120 are both present in the computing entity 108. In another embodiment, the superimposition module 120 is present in the end user communicating device 104 and the image processing & computer vision module 116 is present in the computing entity 108. In the superimposition module 120, a transparency image of the spectacle is multiplied with the image of the end user 102 to account for a spectacle transmissivity, i.e., the degree to which the spectacle allows light to pass through it. This helps model the interaction of the end user's facial skin tone with the spectacle color. Further, the reflection image, which accounts for the reflectivity, i.e., the amount of light being reflected off the spectacle surface, is added to obtain a final superimposed image as an output. In scenarios where the superimposition module is a part of the computing entity, the superimposed image of the end user is compressed and sent to the end user communicating device 104, after which the end user interface module 112 receives the superimposed image of the end user 102 with the spectacle to display.
[0024] FIG. 2 is an exploded view of the computing entity 108 of FIG. 1 according to an embodiment herein. The computing entity 108 includes an end user image receiving module 202, a face and eyes detecting module 204, an Inter Pupillary Distance calculating module 206, a sizing module 208, a transparency image accessing module 210, a reflection image accessing module 212, a side light entry image fetching module 214, and a superimposition parameters and spectacle images transmitting module 216. The end user image receiving module 202 (i) receives and (ii) uncompresses an image of the end user 102 received from the end user communicating device 104. The face and eyes detecting module 204 detects the face and the eyes of the end user 102 in the image of the end user 102. The Inter Pupillary Distance calculating module 206 calculates an inter pupillary distance and a point of superimposition of the spectacle on the image of the end user 102. The sizing module 208 calculates a resizing ratio for the spectacle on the image using pixel to real world conversion factors. The transparency image accessing module 210 accesses a transparency image of the spectacle that is stored in the database 110. The transparency image accessing module 210 accesses the transparency images in red (TR), green (TG), and blue (TB) channels of the spectacle that are stored in the database 110. The transparency image of the spectacle determines the amount of light that is allowed to pass through the spectacle. The reflection image accessing module 212 accesses a reflection image of the spectacle that is stored in the database 110. The reflection image accessing module 212 accesses the reflection images in red (RR), green (RG), and blue (RB) channels of the spectacle that are stored in the database 110. The reflection image of the spectacle determines the amount of light that is reflected off the surface of the spectacle. The side light entry image fetching module 214 fetches a side light entry image that is stored in the database 110. The side light entry image fetching module 214 accesses the side light entry images in red (SR), green (SG), and blue (SB) channels of the spectacle that are stored in the database 110. The side light entry image accounts for the light entering from the sides of the spectacle. The superimposition parameters and spectacle images transmitting module 216 transmits parameters for superimposition and one or more spectacle images to the end user communicating device 104.
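The Inter Pupillary Distance calculation described above might, for example, proceed from two detected pupil centres as follows. This is a hedged sketch: the function name, the use of the pupil midpoint as the point of superimposition, and the angle convention are illustrative assumptions, not taken from the specification.

```python
import math

def superimposition_parameters(left_eye, right_eye):
    """Inter pupillary distance (pixels), point of superimposition
    (taken here as the midpoint between the pupils) and angle of
    inclination (degrees) from two detected pupil centres given as
    (x, y) pixel coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    ipd = math.hypot(dx, dy)                      # inter pupillary distance
    point = ((left_eye[0] + right_eye[0]) / 2.0,  # midpoint between the pupils
             (left_eye[1] + right_eye[1]) / 2.0)
    angle = math.degrees(math.atan2(dy, dx))      # head tilt in the image plane
    return ipd, point, angle

# Pupils detected at (100, 200) and (160, 200) in an upright face:
ipd, point, angle = superimposition_parameters((100, 200), (160, 200))
```

The angle term corresponds to the "angle of inclination of the spectacle" listed among the final computed parameters.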
[0025] FIG. 3 is a flow diagram illustrating a process of the image processing & computer vision modules in the computing entity 108 of FIG. 1 according to an embodiment herein. At step 302, a transparency image of a spectacle and a reflection image of the spectacle are accessed from the database 110. In one embodiment, the transparency image and the reflection image for a spectacle are captured using a spectacle parameter capture device in an offline process. The transparency image of the spectacle helps in determining the amount of light that is allowed to pass through it. The transparency at every point in each channel can be associated with an opacity for each of the three channels R, G and B, or any such color space. In one embodiment, these transparency images in red (TR), green (TG), and blue (TB) of a spectacle are available as a single color image file in the database 110, with the R, G and B planes of the color image actually containing the transparency images TR, TG, and TB. In another embodiment, these transparency images TR, TG and TB are stored as individual images.
[0026] At step 304, a side light entry image is fetched from the database 110. In one embodiment, the side light entry image accounts for the light entering from the sides of the spectacle. The intensity values in the side light entry image are lower towards the spectacle stem and gradually increase towards the center of the spectacle. In one embodiment, the side light entry image is combined with the transparency image by a multiplication operation. In another embodiment, the side light entry images in red (SR), green (SG), and blue (SB) of a spectacle are available as a single color image file in the database 110, with the R, G and B planes of the color image containing the side light entry images SR, SG, and SB. In another embodiment, these side light entry images SR, SG and SB are stored as individual images. At step 306, the image planes (red, green, blue) of the side light entry image are multiplied with the corresponding planes in the transparency image of the spectacle to obtain a final transparency image. At step 308, the image planes (red, green, blue) in the image of the end user are multiplied with the corresponding planes in the final transparency image of the spectacle to obtain a transparency processed user image. At step 310, the reflection image (red (RR), green (RG) and blue (RB)) of the spectacle is added to the transparency processed image of the end user 102 to obtain a final superimposed image.
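Steps 306 to 310 can be sketched plane by plane as follows. This is a toy illustration on nested lists of normalised values standing in for real image arrays; the clipping of the sum to 1.0 is an assumption, not stated in the specification.

```python
def compose_plane(user, transparency, side_light, reflection):
    """Steps 306-310 for one colour plane, on images given as nested
    lists of values normalised to [0, 1]."""
    rows, cols = len(user), len(user[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            t = transparency[r][c] * side_light[r][c]   # step 306: final transparency
            v = user[r][c] * t                          # step 308: transmitted light
            out[r][c] = min(1.0, v + reflection[r][c])  # step 310: add reflection
    return out

# A 1x2 toy plane: left pixel near the stem (low side light), right pixel central.
plane = compose_plane(user=[[0.8, 0.8]],
                      transparency=[[0.5, 0.5]],
                      side_light=[[0.2, 1.0]],
                      reflection=[[0.05, 0.05]])
```

The same routine would be run once each for the R, G and B planes, which is what lets a tinted lens colour the transmitted light differently per channel.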
[0027] FIGS. 4A and 4B are flow diagrams illustrating a process of capturing one or more end user images, computing superimposition parameters using computer vision algorithms, and superimposing the spectacle on a face of the end user of FIG. 1 according to an embodiment herein. At step 402, an image of an end user 102 is captured. At step 404, the image of the end user 102 is compressed. At step 406, the compressed image of the end user 102 is received using a computing entity 108. At step 408, the compressed image of the end user is uncompressed using the computing entity 108. At step 410, the face and the eyes of the end user are detected in the uncompressed image of the end user 102. At step 412, the Inter-Pupillary Distance and a point of superimposition of the spectacle are calculated on the uncompressed image of the end user 102. At step 414, a resizing ratio of a spectacle image is calculated using the pixel to real world conversion factors. At step 416, the final computed parameters are sent to an end user communicating device 104 for superimposition. The final computed parameters include a point of superimposition on the face, and an angle of inclination of the spectacle. At step 418, a transparency image of the spectacle is multiplied with an input image of the end user to account for the transmissivity of the spectacle. At step 420, a reflection image of the spectacle is added to a transparency processed user image to obtain the final superimposed image. At step 422, the final superimposed image is displayed on the end user communicating device 104. The superimposition algorithm is executed at the end user device, according to an embodiment herein. In an alternate approach, the superimposition algorithm can also be executed at the computing entity.
[0028] FIG. 5 is a flow diagram illustrating a process of generating a dynamic shadow based on the illumination on a face of the end user 102 according to an embodiment herein. A basic process involves making the shadow a part of the transparency image and blending it with the end user’s face along with the transparency image. At step 502, a boundary of the face of an end user 102 is detected to take illumination on the face of the end user 102 into account. At step 504, a position of the dominant light source is estimated using known methods from the available research literature (e.g., the methods listed in the paper “A Survey of Light Source Detection Methods” by Nathan Funk, University of Alberta, 2003). At step 506, a position is determined on the face for a shadow of the spectacle. At step 512, the shadow is drawn based on the thickness of a spectacle frame and the dominant light source position.
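One speculative way to realise the last step is to offset the drawn shadow opposite the estimated dominant light direction, scaled by the frame thickness. Everything here, including the scaling choice and the direction convention, is an illustrative assumption rather than the disclosed method.

```python
import math

def shadow_offset(light_dir, frame_thickness_px):
    """Offset (dx, dy) at which a frame shadow could be drawn, opposite
    to the estimated dominant light direction (a vector in image
    coordinates), scaled by the frame thickness in pixels."""
    lx, ly = light_dir
    norm = math.hypot(lx, ly) or 1.0  # guard against a zero vector
    return (-lx / norm * frame_thickness_px,
            -ly / norm * frame_thickness_px)

# Light from the upper-left: the shadow shifts down and to the right.
dx, dy = shadow_offset((-1.0, -1.0), 4.0)
```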
[0029] FIG. 6 illustrates an exploded view of a typical receiver having a memory 602 storing a set of computer instructions, a bus 604, a display 606, a speaker 608, and a processor 610 capable of processing a set of instructions to perform any one or more of the methodologies herein, according to an embodiment herein. In one embodiment, the receiver may be the end user communicating device 104. The processor 610 may also enable digital content to be consumed in the form of video for output via one or more displays 606 or audio for output via speaker and/or earphones 608. The processor 610 may also carry out the methods described herein and in accordance with the embodiments herein.
[0030] Digital content may also be stored in the memory or data storage 602 for future processing or consumption. The memory 602 may also store program specific information and/or service information (PSI/SI), including information about digital content (e.g., the detected information bits) available in the future or stored from the past. A user of the end user communicating device 104 may view this stored information on display 606 and select an item for viewing, listening, or other uses via input, which may take the form of keypad, scroll, or other input device(s) or combinations thereof. When digital content is selected, the processor 610 may pass the information. The content and PSI/SI may be passed among functions within the personal communication device using the bus 604.
[0031] The techniques provided by the embodiments herein may be implemented on an integrated circuit chip (not shown). The chip design is created in a graphical computer programming language and stored in a computer storage medium (such as a disk, tape, physical hard drive, or virtual hard drive such as in a storage access network). If the designer does not fabricate chips or the photolithographic masks used to fabricate chips, the designer transmits the resulting design by physical means (e.g., by providing a copy of the storage medium storing the design) or electronically (e.g., through the Internet) to such entities, directly or indirectly.
[0032] The stored design is then converted into the appropriate format (e.g., GDSII) for the fabrication of photolithographic masks, which typically include multiple copies of the chip design in question that are to be formed on a wafer. The photolithographic masks are utilized to define areas of the wafer (and/or the layers thereon) to be etched or otherwise processed.
[0033] The resulting integrated circuit chips can be distributed by the fabricator in
raw wafer form (that is, as a single wafer that has multiple unpackaged chips), as a bare die, or in a packaged form. In the latter case the chip is mounted in a single chip package (such as a plastic carrier, with leads that are affixed to a motherboard or other higher level carrier) or in a multichip package (such as a ceramic carrier that has either or both surface interconnections or buried interconnections). In any case the chip is then integrated with other chips, discrete circuit elements, and/or other signal processing devices as part of either (a) an intermediate product, such as a motherboard, or (b) an end product. The end product can be any product that includes integrated circuit chips, ranging from toys and other low-end applications to advanced computer products having a display, a keyboard or another input device, and a central processor.
[0034] The embodiments herein can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment including both hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. Furthermore, the embodiments herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0035] The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.
[0036] A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary
storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
[0037] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, remote controls, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
[0014] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 7. This schematic drawing illustrates a hardware configuration of an end user communicating device/a computing entity in accordance with the embodiments herein. The system comprises at least one processor or central processing unit (CPU) 10. The CPUs 10 are interconnected via system bus 12 to various devices such as a random-access memory (RAM) 14, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
[0015] The system further includes a user interface adapter 19 that connects a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices such as a touch screen device (not shown) or a remote control to the bus 12 to gather user input. Additionally, a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23 which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0016] The system and method enable an end user to accurately visualize his or her appearance when wearing spectacles. The embodiments herein achieve this by providing a technology that renders a good color reproduction of the transparency and reflection of the spectacle, taking into account the interaction of the end user's facial skin tone with the spectacle color.
[0025] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.
CLAIMS I/We claim:
1. A system for virtual superimposition of a spectacle on a face of an end user (102), the
system comprising:
(a) a memory unit that stores a database (110), and a set of modules; and
(b) a processor that executes the set of modules, wherein the set of modules
comprises:
a computing entity (108) comprising
an end user image receiving module (202) that (i) receives and (ii) uncompresses an image of the end user (102) received from an end user communicating device (104);
a face and eyes detecting module (204) that detects the face and the eyes of the end user (102) in the image of the end user (102);
an Inter Pupillary Distance calculating module (206) that calculates an inter pupillary distance and a point of superimposition of the spectacle on the image of the end user (102);
a sizing module (208) that calculates a resizing ratio for the spectacle on the image using pixel to real world conversion factors;
a transparency image accessing module (210) that accesses a transparency image of the spectacle that is stored in the database (110), wherein the transparency image accessing module (210) accesses the transparency image in red (TR), green (TG), and blue (TB) channels of the spectacle that is stored in the database (110), wherein the transparency image of the spectacle determines an amount of light that is allowed to be passed through the spectacle;
a reflection image accessing module (212) that accesses a reflection image of the spectacle that is stored in the database (110), wherein the reflection image accessing module (212) accesses the reflection images in red (RR), green (RG), and blue (RB) channels of the spectacle that is stored in the database (110), wherein the reflection image of the spectacle determines an amount of light that is reflected off the surface of the spectacle;
a side light entry image fetching module (214) that fetches a side light entry image that is stored in the database (110), wherein the side light entry image fetching
module (214) accesses the side light entry images in red (SR), green (SG), and blue (SB) channels of the spectacle that is stored in the database (110), wherein the side light entry image accounts for the light entering from the sides of the spectacle; and
a superimposition parameters and spectacle images transmitting module (216) that transmits parameters for superimposition and one or more spectacle images to the end user communicating device (104).
2. The system as claimed in claim 1, wherein the end user communicating device (104) comprises an end user interface device (112), an image superimposition module (120), and a spectacle image and data receiving module (114), wherein the end user interface device (112) captures the image of the end user (102), wherein the image superimposition module (120) performs a resizing operation on the spectacle based on the superimposition parameters received by the spectacle image and data receiving module (114), wherein the sizing operation comprises superimposing markers with two horizontal or two vertical (artificial) lines visible to the end user (102), wherein the end user (102) zooms the image of the end user (102) in or out until a standard card, held at the same distance from the end user interface device (112) as the face of the end user (102), perfectly aligns with the two horizontal or the two vertical (artificial) lines, wherein the calculation of a Pixel to Real World Factor for the end user (102) is performed in the sizing module (208), wherein an image of the end user (102) with the standard card held at the same distance as the face of the end user (102) is processed to automatically detect the edges and corners of the standard card, and wherein the sizing module (208) correlates the real-world size of the standard sized card with its length in pixels in the captured image of the end user (102), thereby giving a correlation factor that is further required to resize the spectacle accurately.
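The pixel-to-real-world correlation described above can be illustrated with a minimal sketch. The function names are hypothetical, and the sketch assumes the standard card is an ISO/IEC 7810 ID-1 card (85.60 mm wide) whose width in pixels has already been measured from the detected edges and corners:

```python
ID1_CARD_WIDTH_MM = 85.60  # real-world width of a standard (ISO/IEC 7810 ID-1) card


def pixel_to_real_world_factor(card_width_px: float) -> float:
    """Millimetres per pixel, given the card's detected width in pixels."""
    if card_width_px <= 0:
        raise ValueError("card width in pixels must be positive")
    return ID1_CARD_WIDTH_MM / card_width_px


def resize_ratio(spectacle_width_mm: float, spectacle_width_px: float,
                 mm_per_px: float) -> float:
    """Scale factor for the spectacle image so its rendered width matches
    the scale of the user's photograph (the correlation factor in use)."""
    target_width_px = spectacle_width_mm / mm_per_px
    return target_width_px / spectacle_width_px
```

For example, if the card spans 856 pixels, the factor is roughly 0.1 mm/px, and a 140 mm wide frame rendered at 700 px would need to be scaled by about 2x.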
3. The system as claimed in claim 2, wherein the spectacle image and data receiving module (114) receives superimposition parameters and one or more spectacle images from the computing entity (108).
4. The system as claimed in claim 1, wherein the image superimposition module (120) comprises:
a spectacle transmissivity obtaining module that multiplies the transparency image of the spectacle with the side light entry image to obtain the final transparency image and then multiplies the final transparency image with the end user image to account for the transmissivity of the spectacle; and
a final superimposed image obtaining module that adds the reflection image of the spectacle to the transparency processed user image to obtain a final superimposed image, wherein the end user interface device (112) displays the final superimposed image of the end user (102).
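The compositing described in this claim — transparency multiplied by side-light entry, applied to the user image, with the reflection added on top — can be sketched as a per-channel operation. This is a minimal illustration (the function name is hypothetical), assuming all images are same-sized RGB arrays with values normalized to [0, 1]:

```python
import numpy as np


def superimpose(user: np.ndarray, transparency: np.ndarray,
                side_light: np.ndarray, reflection: np.ndarray) -> np.ndarray:
    """Composite the spectacle onto the user image, per-channel.

    All inputs are HxWx3 float arrays in [0, 1]:
    transparency (TR, TG, TB), side_light (SR, SG, SB),
    reflection (RR, RG, RB).
    """
    # Transparency modulated by light entering from the sides of the spectacle.
    final_transparency = transparency * side_light
    # Attenuate the user image by how much light the lens transmits.
    transmitted = user * final_transparency
    # Add the light reflected off the lens surface, clamped to valid range.
    return np.clip(transmitted + reflection, 0.0, 1.0)
```

The multiplicative term models transmissivity (it darkens and tints the skin seen through the lens according to the lens color), while the additive term overlays the reflection, matching the interaction of skin tone and spectacle color described in paragraph [0016].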
5. The system as claimed in claim 1, further comprising a network, wherein the images generated by the end user communicating device (104) are transmitted to the computing entity (108), and the images and the data generated by the computing entity (108) are transmitted to the end user communicating device (104).
6. A method of virtual superimposition of a spectacle on a face of an end user (102), the method comprising:
capturing an image of an end user (102);
compressing the image of the end user (102);
receiving, using a computing entity (108), the compressed image of the end user (102);
uncompressing the compressed image of the end user (102) using the computing entity (108);
detecting the face and the eyes of the end user in the uncompressed image of the end user (102);
calculating the Inter-Pupillary Distance and a point of superimposition of the spectacle on the uncompressed image of the end user (102);
calculating resizing ratio of a spectacle image using the pixel to real world conversion factors;
sending the final computed parameters to an end user communicating device (104) for superimposition, wherein the final computed parameters comprise a point of superimposition on the face, and an angle of inclination of the spectacle;
multiplying a transparency image of the spectacle with an input image of the end user (102) to account for the transmissivity of the spectacle;
adding a reflection image of the spectacle with the transparency processed user image to obtain a final superimposed image; and
displaying the final superimposed image on the end user communicating device (104).
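The geometric parameters computed in the method above — the Inter-Pupillary Distance, the point of superimposition, and the angle of inclination — can be sketched from the two detected pupil centres. This is an illustrative sketch, not the claimed implementation: it assumes pupil centres are given as (x, y) pixel coordinates and takes the midpoint of the eye line as the superimposition anchor:

```python
import math


def interpupillary_distance(left_pupil, right_pupil):
    """Euclidean distance in pixels between the two detected pupil centres."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.hypot(dx, dy)


def point_of_superimposition(left_pupil, right_pupil):
    """Midpoint of the pupils — one plausible anchor for the spectacle bridge."""
    return ((left_pupil[0] + right_pupil[0]) / 2.0,
            (left_pupil[1] + right_pupil[1]) / 2.0)


def inclination_angle(left_pupil, right_pupil):
    """Tilt of the eye line in degrees, usable as the spectacle's
    angle of inclination when the head is rotated in the image plane."""
    return math.degrees(math.atan2(right_pupil[1] - left_pupil[1],
                                   right_pupil[0] - left_pupil[0]))
```

Dividing the pixel IPD by the Pixel to Real World Factor of claim 2 yields the real-world IPD, from which the resizing ratio of the spectacle image follows.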