
System For Providing Real Time Virtual Try On Experience Of Wearing An Object And A Method Thereof

Abstract: A system (10) and a method for providing real-time virtual try-on experience to a user of wearing an object made of a selected fabric, are provided. The system (10) comprises a computing device (100) and a server (110). The computing device (100) and the server (110) are communicatively coupled via a communication network. The computing device (100) comprises an image capturing module (102), an image processing module (103), a processor (104), a communication module (109), a display unit (107), a user body model creation module (105), a fabric model creation module (106) and an augmented reality engine (108). The server (110) comprises an object creation module (112) and a database (111). The database (111) comprises a list of object models (113) and an object catalogue (114). The computing device (100) is configured to generate an image representing a real-time virtual view of a user wearing an object made of the selected fabric. Figure 1.


Patent Information

Application #
202211057310
Filing Date
06 October 2022
Publication Number
15/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

PARMINDER SINGH
122 First floor, Western tower, Nijjer chowk, Western tower, Kharar, SAS Nagar (Mohali), Punjab – 140301, India

Inventors

1. PARMINDER SINGH
122 First floor, Western tower, Nijjer chowk, Western tower, Kharar, SAS Nagar (Mohali), Punjab – 140301, India

Specification

FIELD OF THE INVENTION
[0001] The present disclosure relates generally to a system and a method of providing a virtual try-on experience to a user, and more particularly to a system and a method for customizing an object model based on a selected fabric to be modelled or overlaid on a user body model.
BACKGROUND
[0002] The retail sale of clothes and garments is a massive industry all over the world. A traditional retail model entails selling garments from a shop or store where members of the public can come in, try on a variety of garments, and then purchase their chosen garment(s). With an increased pace of modern life and consumer demand for more retail choices, regardless of geographical location or availability, online clothing shopping has grown in popularity.
[0003] There are numerous online retailers and shopping applications that allow users to view how garments look on the body of a model before purchasing them. Images of models wearing the garments are frequently provided to give an idea of how they will appear in real life. However, this is of limited use because the models used in such images are frequently of a single body type, or there may be only a small number of "body types" to choose from in order to more closely resemble the consumer. The options available are therefore limited, and the available body models differ from the actual body types of the consumers using the online retail service.
[0004] The online retailers may also offer returns to customers who purchased an item and are not satisfied with the color, fit, shape, size or quality of the cloth of an article. However, this procedure consumes extra time for both the customers and the online retailers, and higher return rates also cause monetary loss to the retailers. Women, for example, frequently order the same dress in two sizes, try both on at home to see how each fits and drapes, and return the one that does not fit as well. Such a 50% return rate can cost up to 10-25% of online retailer revenues, equating to billions of dollars lost globally, as retailers have unwanted goods shipped back, repackaged, and restocked for resale. As a result, in addition to the original lost sale, the retailer bears the additional resources, time, and expense involved in the return process. This is a significant issue for online retailers.
[0005] Various approaches have been taken to solve these problems, such as allowing customers/online users to choose a 3D character from a list of 3D characters that may resemble the body type of a user. However, the resemblance is still unlikely to exceed 60-70%, as the 3D characters are limited and cannot cover a large variety of body shapes and structures. Also, the 3D characters do not give the user a real-life-like experience of how the clothing will look on the user in real life. Some prior arts also provide options to manipulate the structure of the 3D characters as per the user's wish. However, this is a time-consuming process that requires technical expertise and is hence not suitable for every customer of the online retailers.
[0006] There is also a need to provide a simulating environment where a customer can simulate the clothing try-on experience based on his or her fabric preference, i.e., the type of fabric the user wants to use. Online retailers provide multiple options for choosing a color or fabric type from a predefined list of options available in stock. However, there is no option for a user to provide a specific fabric for a particular design of clothing and virtually try it on at the same time.
[0007] The present disclosure solves the above-mentioned problems by providing a real-time virtual try-on experience to the user, where the user can provide his or her fabric choice in real-time, and the method can provide a virtual try-on experience by overlaying a selected object made of the chosen fabric on the user's body type model. Accordingly, the user can actually see how a particular dress woven in the selected fabric will look on him or her.

OBJECTIVE OF THE DISCLOSURE
[0008] An objective of the present disclosure is to ameliorate the limitations of the existing prior arts in providing a virtual try-on experience.
[0009] An objective of the present disclosure is to provide a simulating environment where a customer or a user can simulate the clothing try-on experience based on the fabric preference of the user.
[0010] Another objective of the present disclosure is to enable a user to provide a fabric of choice in real-time for virtual try-on.
[0011] Yet another objective of the present disclosure is to provide a real-time picture or image of the fabric woven into a desired outfit.
[0012] Yet another objective of the present disclosure is to generate and display a real-time picture or image of the user with a desired outfit or an object made of the chosen fabric.
SUMMARY OF THE DISCLOSURE
[0013] The present disclosure discloses a system and a method for providing real-time virtual try-on experience of wearing an object made of a selected fabric. The system comprises a computing device and a server that is communicatively connected with the computing device. The computing device further comprises a memory, an image capturing module, a processor, an image processing module, a fabric model creation module, a user body model creation module, and an augmented reality (AR) engine. The memory of the computing device is configured to store a set of instructions, a plurality of body models, and a plurality of fabric images. The processor is configured to execute the set of instructions to provide the real-time virtual try-on experience to a user. The image capturing module is configured to capture at least one of an image of the user or a fabric of interest in real-time. The image processing module is configured to determine whether an object captured in the captured image is the user's image or a fabric image based on an image recognition algorithm stored in the memory.
[0014] The processor is configured to generate, using the fabric model creation module, a fabric model based on at least one image of a fabric of interest, generate, using the fabric model creation module, a fabric layer based on the generated fabric model, obtain, from the server, at least one object model corresponding to at least one object, generate, using the augmented reality (AR) engine, a superimposed model by overlaying the at least one object model on a user body model and generate, using the AR engine, an image representing a real-time virtual view of a user wearing an object made of the selected fabric by overlaying the fabric layer on the superimposed model.
[0015] In some embodiments, the processor is further configured to display, on a display unit of the computing device, the image representing the real-time virtual view of the user wearing the object made of the selected fabric.
[0016] In some embodiments, the user body model creation module is configured to generate the user body model based on at least one of a captured image of the user or a first user input. The user body model may be created by determining at least one of skeletal information of the user, or user characteristics comprising one or more index or articulation points, user dimension, user's overlying musculature and surface features, filling in the user's outer dimensions based on at least one of the captured image of the user or the first user input, and generating the user body model based on the at least one of skeletal information of the user, or the user characteristics. In some embodiments, the user body model may be created by obtaining a first body model from the plurality of body models and fixing an image of a user face on the first body model of the plurality of body models.
[0017] In some embodiments, the fabric model creation module is configured to create the fabric model associated with the fabric captured in the captured image, by determining a plurality of characteristics associated with the fabric including a texture, thickness, pattern, color, stretch, stiffness, drape, and other related characteristics.
[0018] In some embodiments, the augmented reality (AR) engine is configured to generate the superimposed model by overlaying the at least one object model on the user body model by matching articulation points of the user body model to corresponding articulation points of the object model. The processor is configured to obtain from the server, the at least one object model corresponding to at least one object.
[0019] In some embodiments, the augmented reality (AR) engine is configured to generate the image representing a real-time virtual view of a user wearing an object made of the selected fabric by comparing the dimensions of the fabric layer with dimensions of the superimposed model and adjusting the dimensions of the fabric layer based on the dimensions of the superimposed model for overlaying the fabric layer on the superimposed model. The server is configured to create an object model corresponding to an object using an object creation module.
[0020] In some embodiments, the processor is further configured to update the user body model based on the dynamic movements of the user in a three-dimensional environment and update the object model overlaying the user body model based on the dynamic movements of the user.
[0021] In another aspect, a method for providing real-time virtual try-on experience of wearing an object made of a selected fabric is provided. The method comprises steps of: capturing, using the image capturing module, an image of a user or a fabric of interest in real-time, generating, using a fabric model creation module, a fabric model based on at least one image of a fabric of interest, generating, using a fabric model creation module, a fabric layer based on the generated fabric model, obtaining, from the server, at least one object model corresponding to at least one object, generating, using the augmented reality (AR) engine, a superimposed model by overlaying the at least one object model on a user body model, and generating, using the AR engine, an image representing a real-time virtual view of a user wearing an object made of the selected fabric by overlaying the fabric layer on the superimposed model; and displaying, using the display unit, a real-time virtual view of the user wearing the object made of the selected fabric.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The foregoing and other features of this disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings, in which:
[0023] Fig. 1 illustrates an exemplary block diagram of a system for providing real-time virtual try-on experience in accordance with the present disclosure.
[0024] Fig. 2 illustrates an exemplary flow chart for a method for providing real-time virtual try-on experience in accordance with the present disclosure.
[0025] Fig. 3 illustrates an exemplary flow chart of a method of creating a fabric model based on an image of a fabric captured by a user in accordance with the present disclosure.
[0026] Fig. 4 illustrates an exemplary object model outlined with articulation points in accordance with the present disclosure.
[0027] Fig. 5 illustrates an exemplary user body model outlined with articulation points in accordance with the present disclosure.
[0028] Fig. 6 illustrates an exemplary superimposed model formed by overlaying the object model on the user body model in accordance with the present disclosure.
LIST OF REFERENCE NUMERALS
10 – System
100 – Computing device
101 – Memory
102 – Image capturing module
103 – Image processing module
104 – Processor
105 – User body model creation module
106 – Fabric model creation module
107 – Display unit
108 – Augmented reality (AR) engine
109 – Communication module
110 – Server
111 – Database
112 – Object creation module
113 – List of object models
114 – Object catalogue
400 – Object model
500 – User body model
600 – Superimposed model
DETAILED DESCRIPTION
[0029] Embodiments of the present invention are best understood by reference to the figures and description set forth herein. All the aspects of the embodiments described herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit and scope thereof, and the embodiments herein include all such modifications.
[0030] As used herein, the term ‘exemplary’ or ‘illustrative’ means ‘serving as an example, instance, or illustration.’ Any implementation described herein as exemplary or illustrative is not necessarily to be construed as advantageous and/or preferred over other embodiments. Unless the context requires otherwise, throughout the description and the claims, the word ‘comprise’ and variations thereof, such as ‘comprises’ and ‘comprising’ are to be construed in an open, inclusive sense, i.e., as ‘including, but not limited to.’
[0031] This disclosure is generally drawn, inter alia, to methods, apparatuses, systems, devices implemented as tools for implementation of object models to be superimposed on a user body model in a virtual environment.
[0032] The present disclosure enables computing devices to efficiently determine object and/or user body models and use these models to render articles or objects, such as clothing, on an image or video of the users. For example, in augmented reality, the technology may render clothing in real time over a video or image of a user. The user may include, for example, but is not limited to, a customer, a buyer, a purchaser, a shopper or the like. The user may also have the option to use the disclosed invention with a particular fabric whose information is provided by the user to the computing device.
[0033] Methodologies for remote visualization of how clothing, apparel, or other elements would appear on a consumer are included in the technology described herein. For example, a consumer may choose an item of clothing to "try on" with augmented reality. Body mapping algorithms may be used to determine a skeletal model and then add muscle, skin, and other body dimensions to generate a three-dimensional model of the consumer. In one aspect of the invention, the three-dimensional model of the consumer appears as a lifelike virtual image of the user. The technology may generate a three-dimensional model of an article of clothing using measurements and fabric information received from the manufacturer or retailer. In an alternative embodiment of the invention, the fabric information can be retrieved from a fabric model which is created when an image of a fabric is captured. The technology may then overlay the item of clothing on the user, allowing the user to see a realistic rendering of himself or herself wearing the item of clothing.
[0034] The present disclosure provides a system 10 and a method for providing real-time virtual try-on experience of wearing an object made of a selected fabric. The system 10 and method of the present disclosure use artificial intelligence techniques to show the fabric of interest worn on the user-selected body model, superimposed with an object model of clothing, to provide a real-time view of the outfit within minutes or seconds. Because of the use of such techniques, the system 10 and method of the present disclosure enable selection of the fabric in real-time and let the user visualize virtually how the selected fabric suits him or her.
[0035] Fig. 1 illustrates an exemplary block diagram of a system 10 for providing real-time virtual try-on experience to a user in accordance with the present disclosure. The system 10 comprises a computing device 100 and a server 110. The computing device 100 is communicatively coupled to the server 110 via a communication network. The computing device 100 further comprises a memory 101, an image capturing module 102, an image processing module 103, a processor 104, a user body model creation module 105, a fabric model creation module 106, a display unit 107, an augmented reality (AR) engine 108 and a communication module 109, such that the generation of an image representing a real-time virtual view of a user wearing an object made of the selected fabric is performed in the computing device 100. The memory 101 is configured to store a set of instructions, a plurality of body models, and a plurality of fabric images. The computing device 100 may be a mobile phone, a kindle, a PDA (Personal Digital Assistant), a tablet, a music player, a computer, an electronic notebook, or a smartphone.
[0036] In some alternate embodiments, the server 110 comprises the image processing module 103, the user body model creation module 105, the fabric model creation module 106, the augmented reality (AR) engine 108, the object creation module 112 and the database 111, such that the generation of the image representing a real-time virtual view of a user wearing an object made of the selected fabric is performed in the server 110.
[0037] The image capturing module 102 is configured to capture at least one of an image of the user or a fabric of interest in real-time. The image capturing module 102 includes, but is not limited to, a camera, a lidar sensor, a time-of-flight sensor, a dual camera sensor, or any other sensor which is used to capture a user's image or a fabric image. In one of the implementations, the image capturing module 102 is further configured to scan a body of a user and determine the characteristics of the user body. The characteristics include, for example, but are not limited to, dimensions of the body, skin color, contour of the body, shape of the body or the like.
[0038] The image processing module 103 is configured to determine whether an object captured in the captured image is the user’s image or the fabric image using an image recognition algorithm stored in the memory 101. Alternatively, other suitable algorithms may be used to recognize the object.
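By way of illustration only, one possible realisation of such a check is sketched below in Python. The use of OpenCV, a Haar-cascade face detector, and the rule that a detected face indicates a user image are assumptions made for the example; the disclosure only requires that an image recognition algorithm stored in the memory 101 distinguish the two kinds of captures.

# Illustrative sketch only; assumes OpenCV (cv2) is installed and that a
# detected face is a sufficient cue that the capture is the user's image
# rather than a fabric image.
import cv2

def classify_capture(image_path: str) -> str:
    """Return 'user' if a face is detected in the captured image, else 'fabric'."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return "user" if len(faces) > 0 else "fabric"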
[0039] The user body model creation module 105 is configured to generate a user body model 500 by determining at least one of skeletal information of the user, or user characteristics comprising one or more index or articulation points, user dimensions, the user's overlying musculature and surface features, filling in the user's outer dimensions based on at least one of the captured image of the user or a first user input, and generating the user body model 500 based on the at least one of the skeletal information of the user or the user characteristics. The user dimensions include, but are not limited to, a shoulder width, an arm length, and so on, determined using the image data and a known size reference (such as the user's height, the user's pupils, environmental references, and so on). The user body model 500 may be a 2-dimensional (2D) or a 3-dimensional (3D) model. In some implementations, the first user input includes, but is not limited to, a clothing size, a waist measurement, a weight, a shoulder measurement, or a height. In some embodiments, the user body model 500 may be created by obtaining a first body model from the plurality of body models and fixing an image of a user face on the first body model of the plurality of body models. The user face may be cropped from the captured image of the user. The first body model may be similar to a body type of the user, which may be selected based on the first user input or the determined at least one of the skeletal information of the user or the user characteristics. If the first body model is not similar to the body type of the user, a second body model is selected for generating the user body model 500. In an embodiment, the system 10 may employ alternate methods for generating the user body model 500 for the user. In some implementations, the dimensions of the user body model 500 may be measured using relative measurements, for example, if a user is holding a credit card in her hand. By knowing the predefined measurements of the credit card, the user body model creation module 105 can determine the relative measurements of the user body by taking the credit card as a reference measurement. In some implementations, the user body model creation module 105 may use image analysis to identify an outline or outer dimensions of a user. Silhouette techniques can be used to create outlines of the outer dimensions of the user.
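The relative-measurement approach described above can be illustrated, purely as an example, by the following Python sketch: a reference object of known size (here the 85.6 mm width of a standard ID-1 credit card) yields a millimetres-per-pixel scale that converts pixel measurements of the user's silhouette into real dimensions. The function name and the input format are assumptions for the example, not part of the disclosure.

# Illustrative sketch only; the reference object and measurement names are
# assumptions. A known-size reference converts pixel spans to millimetres.
CREDIT_CARD_WIDTH_MM = 85.6  # ISO/IEC 7810 ID-1 card width

def estimate_dimensions_mm(reference_width_px: float, body_spans_px: dict) -> dict:
    """Scale pixel measurements of the user's silhouette to millimetres."""
    mm_per_px = CREDIT_CARD_WIDTH_MM / reference_width_px
    return {name: span * mm_per_px for name, span in body_spans_px.items()}

# Example: the card spans 120 px in the image; shoulder and arm spans in px.
print(estimate_dimensions_mm(120.0, {"shoulder_width": 620.0, "arm_length": 880.0}))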
[0040] The fabric model creation module 106 is configured to create a fabric model associated with the fabric in the captured image, by determining a plurality of characteristics associated with the fabric including a texture, thickness, pattern, color, stretch, stiffness, drape, and other related characteristics. In some cases, the fabric model creation module 106 may use size and/or fabric data to simulate fabric stretch, stiffness, drape, color, thickness, and so on. The fabric model creation module 106 is configured to generate a fabric layer based on a generated fabric model. The fabric layer mimics a plurality of characteristics of the fabric of interest. In general terms, the fabric model creation module 106 creates a graphical layer interpretation of a fabric that mimics the characteristics of the fabric. In some embodiments, the image of the fabric of interest may be obtained from the plurality of fabric images stored in the memory 101. In some embodiments, the image of the fabric of interest is captured using the image capturing module 102 in real time.
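As a non-limiting sketch, the fabric model and the fabric layer described above could be represented by a data structure such as the following; the field names and the dictionary form of the layer are assumptions made for illustration only.

# Illustrative sketch only; the fields are assumptions, not the disclosed schema.
from dataclasses import dataclass

@dataclass
class FabricModel:
    texture: str          # e.g. "woven", "knit"
    color: tuple          # RGB triple
    thickness_mm: float
    stretch: float        # 0.0 (rigid) .. 1.0 (highly elastic)
    stiffness: float
    drape: float

def create_fabric_layer(model: FabricModel, width_px: int, height_px: int) -> dict:
    """Return a graphical layer description that mimics the fabric characteristics."""
    return {"size": (width_px, height_px),
            "fill": model.color,
            "material": {"stretch": model.stretch, "stiffness": model.stiffness,
                         "drape": model.drape, "thickness_mm": model.thickness_mm}}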
[0041] The augmented reality (AR) engine 108 is configured to create a superimposed model 600 by overlaying an object model 400 on the user body model 500. As the computing device 100 is communicatively coupled to the server 110, the computing device 100 is configured to obtain the object model 400 from the server 110. The server 110 may be associated with an entity. The entity may be a manufacturer, a reseller, a store owner, a retailer, a distributor, or any third-party entity who is providing services to enable the computing device 100 to implement the virtual try-on method. The server 110 may further comprise an object creation module 112 and a database 111. The object creation module 112 is used to create object models 400. The database 111 may further comprise an object catalogue 114 which consists of various objects that may be available at the entity. Further, the database 111 consists of a list of object models 113. The object models 400 can be used by the AR engine 108 to create a superimposed model 600 of the user body model 500 and an object model 400.
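For illustration only, the server-side records could take a form like the following, with the object catalogue 114 listing the objects available at the entity and the list of object models 113 keyed by object identifier; the field names and the in-memory dictionaries are assumptions standing in for the database 111.

# Illustrative sketch only; real storage would be the database 111, and the
# fields shown are assumptions rather than the disclosed schema.
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    object_id: str
    articulation_points: dict = field(default_factory=dict)  # name -> (x, y)

OBJECT_CATALOGUE = {"tshirt-001": "Full-sleeve t-shirt"}   # object catalogue 114
OBJECT_MODELS = {"tshirt-001": ObjectModel(                # list of object models 113
    "tshirt-001", {"left_elbow": (40, 120), "right_elbow": (160, 120)})}

def get_object_model(object_id: str) -> ObjectModel:
    """Server-side lookup of the object model corresponding to a selected object."""
    return OBJECT_MODELS[object_id]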
[0042] In some embodiments, the user accesses the entity using the computing device 100 via the server 110 for selecting fabrics/clothes for the object model 400. Based on the selected fabrics/clothes from the entity, the fabric layer may be generated. Further, the entity also publishes the purchase link for the fabrics/clothes. The user can also buy the fabrics/clothes using the purchase link of the entity.
[0043] In some cases, the object creation module 112 may use information (such as that provided by a manufacturer and/or retailer) to generate a three-dimensional (3-D) object model 400 of a piece of clothing. The object models 400 can be stored in the database 111 for later use. Further, the object model 400 consists of various articulation points, as shown in Fig. 4; the object model 400 may include the articulation of the article of clothing, for example a full-sleeve t-shirt.
[0044] The articulation point can be a point at which an object bends, stretches, or hangs. The articulation or index points of the object model 400 may be determined automatically by the object creation module 112 or manually entered by a user, such as an administrative user at a retailer or manufacturer or a consumer, depending on the implementation. For example, based on a photograph of a t-shirt, the object creation module 112 may automatically identify and/or assign articulation/index points where the t-shirt would typically bend, stretch, or move. Further, the object articulation points may indicate points on the object model 400 that correspond to points on the user body model 500; for example, an articulation point representing an elbow on the object model 400 can be mapped and/or overlaid with an articulation point representing the elbow of the user body model 500. Similarly, other articulation points of the object model 400 can be mapped with their corresponding articulation points of the user body model 500. In some cases, the object creation module 112 may receive several images or a video of an object and determine the articulation points based on the articulation of the object in the images/video. In some implementations, a user may manually select articulation and/or index points using a graphical user interface provided by or in conjunction with the object creation module 112.
[0045] The articulation points of the object model 400 help in mapping the object over the user body model 500 by matching the articulation points of the user body model 500 (shown in Fig. 5) to the corresponding articulation points of the object model 400 (i.e., the clothing). For example, the articulation points 401, 402, 403, 404 of the object model 400 as shown in Fig. 4 will be mapped with the articulation points 501, 502, 503, 504 of the user body model 500 as shown in Fig. 5.
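A minimal illustration of this mapping, assuming for the example only that each model exposes its articulation points as named 2-D coordinates, is given below.

# Illustrative sketch only; point names and coordinates are assumptions.
def map_articulation_points(object_points: dict, body_points: dict) -> dict:
    """Pair each articulation point of the object model with the corresponding
    articulation point of the user body model."""
    return {name: (object_points[name], body_points[name])
            for name in object_points if name in body_points}

object_model_points = {"left_elbow": (40, 120), "right_elbow": (160, 120)}
body_model_points = {"left_elbow": (42, 118), "right_elbow": (158, 121)}
print(map_articulation_points(object_model_points, body_model_points))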
[0046] The computing device 100 is further configured to overlay the fabric layer on the superimposed model 600 to generate an image representing a real-time virtual view of a user wearing an object made of the selected fabric. The generated image may be a 3D image or a 2D image. Accordingly, the augmented reality (AR) engine 108 performs two necessary functions. First, it maps the object model 400 to the user body model 500 to create the superimposed model 600. Then, in the second process, it overlays the graphical layer of the fabric (i.e., the fabric layer) created using the fabric model creation module 106 over the superimposed model 600. In some embodiments, the AR engine 108 is further configured to modify or change the color of the fabric layer based on the user input on a real-time basis. For example, if the user wishes to try a different color of the same fabric, the AR engine 108 changes the color of the fabric layer overlaid on the superimposed model 600.
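The two functions of the AR engine 108 noted above can be sketched, for illustration only, as two successive image compositions. The sketch assumes the Pillow library and pre-aligned RGBA layers of equal size; the actual AR engine 108 is not limited to this approach.

# Illustrative sketch only; assumes Pillow and layers already aligned and of
# equal size. Step 1 superimposes the object model on the user body model;
# step 2 overlays the fabric layer on the superimposed model.
from PIL import Image

def render_try_on(user_body: Image.Image, object_layer: Image.Image,
                  fabric_layer: Image.Image) -> Image.Image:
    superimposed = Image.alpha_composite(user_body.convert("RGBA"),
                                         object_layer.convert("RGBA"))
    return Image.alpha_composite(superimposed, fabric_layer.convert("RGBA"))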
[0047] The computing device 100 further comprises a display unit 107 which presents to the user an augmented view of the user wearing the fabric. In other words, the computing device 100 displays the image representing a real-time virtual view of a user wearing an object made of the selected fabric, thereby the user may see a realistic rendering of himself or herself wearing the item of clothing.
[0048] Now referring to Fig. 2, which illustrates an exemplary flow chart 200 representing a method for providing real-time virtual try-on experience to the user in accordance with the present disclosure. At step S201, an image of a fabric is captured in real-time using one or more image capturing modules 102. The one or more image capturing modules 102 may include, for example, but are not limited to, a camera, an infrared sensor, a lidar or the like. The image processing module 103 can process the image using an algorithm to extract information related to the fabric and its material and send this information to the fabric model creation module 106 to create the fabric model of the fabric. In some embodiments, the user may select the image of the fabric from the plurality of fabric images that are stored in the memory 101. The fabric model may include material property information, such as the stretch of the fabric, the stiffness of the fabric, the drape of the fabric, and so on, which may help improve the simulated fit of an article of clothing. The method then moves to step S202, where an image of a user is captured by the one or more image capturing modules 102. The one or more image capturing modules 102 can be the same one or more image capturing modules 102 which captured the image of the fabric. The one or more image capturing modules 102 may have extra modules to capture the dimensions of a user body from the front, back, top and bottom to create a 2-D/3-D image of the user body. The user body model creation module 105 uses the dimensions captured by the one or more modules of the one or more image capturing modules 102 to create the user body model 500. Alternatively, a user may have previously accessed the web application or the website, and the website may already have a user body model 500 which is prestored in the memory 101 of the computing device 100 or at the server 110 (not shown). In some embodiments, the user body model 500 is generated by obtaining the first body model from the plurality of body models and fixing an image of a user face on the first body model. The first body model may be similar to the body type of the user. At step S203, the user accesses the database 111 consisting of the list of object models 113 by using a web application. The object models 400 may include models related to an object, i.e., an article of clothing or a type of clothing, for example, a dress, a suit, a blazer, a shirt, or any other type of clothing. Similarly, the object models 400 may include models related to accessories, for example, but not limited to, sunglasses, eyeglasses, spectacles, a watch, bands, or the like. A user may use the virtual try-on method to check how a particular article of clothing or an accessory will look on the user's body. At step S204, the user selects the object model 400 corresponding to an article of clothing or accessory from the list of the object models 113 stored in the database 111. Alternatively, a user may select an object, i.e., an article of clothing, from the object catalogue 114 presented on the web application or website. After that, the server 110 may retrieve the object model 400 corresponding to the selected object. In an embodiment, the user may select one or more objects from the object catalogue 114. The selection may also be performed by the computing device 100 implementing the virtual try-on method to find a best fit for the user. In other words, the computing device 100 or the AR engine 108 is configured to determine the at least one object model 400 that matches the user based on user dimensions. At step S205, the object model 400 is overlaid on the user body model 500 to create a resulting superimposed model 600 (as shown in Fig. 6) by using the augmented reality (AR) engine 108. The object model 400 is superimposed in such a way that the articulation points 401, 402, 403, 404 of the object model 400 map onto and overlay the corresponding articulation points 501, 502, 503, 504 of the user body model 500. For example, an object model 400 representing a cap is superimposed and/or adjusted on the user body model 500 so that the cap covers the area of the user body model's head for which the cap is designed, based on the articulation points of the cap mapped with the articulation points of the temple/head area of the user body model 500.
[0049] In some embodiments, the user body model 500 that is prestored in the memory 101 of the computing device 100 or at the server 110 (not shown) may initially require a registration step for storing the corresponding user body model 500 of the user. The user may further access the user body model 500 that is prestored either in the memory 101 of the computing device 100 or in the database 111 of the server 110 by first authenticating the user. The authentication of the user requires input of a passcode by the user that is provided at the time of registration.
[0050] In another embodiment, the user may select any prestored image or a model from the memory 101 of the computing device 100 as the user body model 500.
[0051] In yet another embodiment, the user may select any prestored image from the memory 101 of the computing device 100 as the fabric model.
[0052] In yet another embodiment, the user may select one or more prestored images or capture real-time images using the one or more image capturing modules 102 of the computing device 100, to be given as inputs to the one or more fabric models to be used with the object models 400.
[0053] Fig. 6 illustrates an exemplary embodiment where the object model 400 of Fig. 4 is overlaid on the user body model 500 of Fig. 5. As an example, the articulation points 401, 402, 403, 404 of an object model 400 as shown in Fig. 4 will be mapped with the articulation points 501, 502, 503, 504 of the user body models 500 as shown in Fig. 5 to create a superimposed model 600 as shown in Fig. 6. The articulation points mapping is represented by points 601, 602, 603 and 604 in the Fig. 6.
[0054] In an embodiment, the superimposed model 600 may move in real time with the movement of the user, articulating at various articulation points. The technology may move the superimposed model 600 with the user as the user rotates his or her body, articulates a joint, or otherwise moves, for example, based on changes to the user model. In some implementations, the fabric model may include material property information, such as the stretch of the fabric, stiffness of the fabric, drape of the fabric, and so on, which may help improve the simulated fit of the article of clothing.
[0055] At step S206, a fabric layer is created as per the fabric model created at step S201. The fabric layer mimics a plurality of characteristics of a fabric in a detailed manner. The characteristics may comprise stretch, stiffness, drape, color, thickness, size, texture and other related characteristics of a fabric. Based on these characteristics, the fabric layer can be stretched, elongated, and/or overlaid around the superimposed model 600 to present a view of the user wearing the object. The extent of the stretching, elongation or overlaying of the fabric layer can be based on the dimensions and articulation points of a user body as per the user body model 500. The AR engine 108 compares the dimensions of the fabric layer with the dimensions of the superimposed model 600 and adjusts the dimensions of the fabric layer accordingly. At step S207, a real-time view of the user wearing an object made of a selected fabric is rendered by a display and presented to the user in real-time. In an embodiment, the method of virtual try-on comprises capturing the image of a fabric in real-time for processing along with the object model 400 for creating a real-time view of the user wearing the fabric.
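Purely as an illustration of the dimension comparison and adjustment at step S206, the following sketch resizes a fabric layer towards the dimensions of the superimposed model 600 while capping enlargement by a hypothetical stretch factor between 0 and 1; the resizing strategy and the stretch value are assumptions made for the example.

# Illustrative sketch only; assumes Pillow and a hypothetical 0..1 stretch value.
from PIL import Image

def fit_fabric_layer(fabric_layer: Image.Image, target_size: tuple,
                     stretch: float) -> Image.Image:
    """Resize the fabric layer towards the superimposed model dimensions,
    limiting enlargement by the fabric's stretch factor."""
    max_scale = 1.0 + stretch
    scale_w = min(target_size[0] / fabric_layer.width, max_scale)
    scale_h = min(target_size[1] / fabric_layer.height, max_scale)
    new_size = (int(fabric_layer.width * scale_w), int(fabric_layer.height * scale_h))
    return fabric_layer.resize(new_size)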
[0056] In an embodiment, the resulting superimposed model 600 is applied or overlaid with the fabric layer created as per the fabric model using the fabric model creation module 106 to present a resulting image on the display unit 107 of the user wearing the fabric in real-time.
[0057] As mentioned in step S204, a user selects an object from a set of objects. For instance, a web application or a website of an entity such as a retailer, a manufacturing entity or a store owner may be accessed by a user to select an object from a set of objects. For example, the application or the website may show the user a graphical user interface with multiple objects (e.g., graphical images) and/or representations of the object models 400 (e.g., in a grid, scrollable list, etc.). The multiple objects can be fetched from the database 111. The user may choose an object from the list to render or overlay on the user's image (e.g., over a user model). The AR engine 108 may detect visual or audio cues to switch between object models 400 in some cases.
[0058] In an embodiment, the AR engine 108 may determine a second object model with one or more different dimensions than the first object model 400. For example, the second object could be a different piece of clothing or a larger size of the same piece of clothing. The AR engine 108 may determine a dimension of the user based on the user body model 500 (and/or a received or determined dimension such as normal sizing information) and then automatically determine which object model best matches the user (e.g., closer, larger than, within a threshold range, etc.). The AR engine 108 may automatically display the recommended object model 400 and/or recommend a specific size or object based on this matching.
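One simple way such a best-match determination could be made is sketched below; the size chart, the use of chest circumference, and the rule of picking the closest size that is not smaller than the user's measurement are assumptions chosen for the example rather than features of the disclosure.

# Illustrative sketch only; the size chart and matching rule are hypothetical.
SIZE_CHART_CM = {"S": 92, "M": 100, "L": 108, "XL": 116}  # chest circumference

def recommend_size(user_chest_cm: float) -> str:
    """Pick the closest size that is at least as large as the user's measurement,
    falling back to the overall closest size if none is large enough."""
    candidates = {s: c for s, c in SIZE_CHART_CM.items() if c >= user_chest_cm}
    chart = candidates or SIZE_CHART_CM
    return min(chart, key=lambda s: abs(chart[s] - user_chest_cm))

print(recommend_size(103.0))  # -> "L" under these hypothetical values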
[0059] Fig. 3 illustrates an exemplary flow chart 300 illustrating the method of creating a fabric model based on an image of a fabric captured by a user. At step S301, a user captures an image of a fabric. It is desirable that the image be of high quality so that it can be processed later for better results. To ensure this, an image acceptance criterion can be defined. The image acceptance criteria may include one or more parameters, for example, but not limited to, lighting conditions, exposure level, image noise, focus, and other related parameters. Each parameter corresponds to a threshold value that should be met so that the captured image can be considered an accepted image. At step S302, the fabric model creation module 106 checks whether the captured image satisfies the image acceptance criteria parameters. If yes, the image is accepted and further processed; otherwise, the image of the fabric needs to be recaptured. Once the image is accepted, the process moves to step S303. At step S303, the accepted image is further processed to enhance the quality of the image. The quality of the image is enhanced using various techniques, which may include the use of machine learning and artificial intelligence. Further, the machine learning and artificial intelligence techniques can be used to determine characteristics of the fabric. The characteristics may include, for example, but are not limited to, the stretch of the fabric, the stiffness of the fabric, the drape of the fabric, the color of the fabric, the thickness of the fabric, the type of the fabric and other related characteristics. Alternatively, other methods can also be used to determine the characteristics of the fabric. Then, at step S304, a three-dimensional model of the fabric is created based on the captured image data and the characteristics of the fabric. The three-dimensional model of the fabric can be used to create and overlay a fabric layer over the superimposed model 600 resulting from the superimposing of the object model 400 and the user body model 500.
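As an illustration of the check at step S302, the following sketch tests a captured fabric image against two of the acceptance parameters mentioned above, using mean brightness as a crude lighting/exposure metric and the variance of the Laplacian as a focus metric; the specific metrics and threshold values are assumptions made for the example.

# Illustrative sketch only; metrics and thresholds are assumptions, not
# values from the disclosure.
import cv2

MIN_BRIGHTNESS, MAX_BRIGHTNESS, MIN_FOCUS = 60.0, 200.0, 100.0

def meets_acceptance_criteria(image_path: str) -> bool:
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    brightness = gray.mean()                       # crude lighting/exposure check
    focus = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher variance = sharper image
    return MIN_BRIGHTNESS <= brightness <= MAX_BRIGHTNESS and focus >= MIN_FOCUS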
[0060] In an embodiment, the characteristic of the fabric includes one of a soft fabric, a smooth fabric, a rough fabric, a light fabric, a heavy fabric, a warm fabric, a luminous fabric, a radiant fabric, a shining fabric or a combination thereof. Further, the characteristic of the fabric also includes the material of the fabric, for example, cotton, a mixed fabric, georgette, canvas, leather, crepe, lace, gingham, polyester, silk, velvet, viscose, etc.
[0061] In another embodiment, the augmented reality (AR) engine 108 is operable to perform a process of tracking the user's position, location, orientation, posture and other related dynamic movements of the user. The tracking process may be performed by the AR engine 108 with the help of the one or more image capturing modules 102 and/or the image processing module 103. Tracking data is generated continuously by the AR engine 108 in real-time. The tracking data may comprise tracking information of the movement of the user in an XYZ axis space of a three-dimensional environment. The user body model 500 associated with the user may be updated to follow the movement of the user in the three-dimensional environment. Similarly, the object model 400 overlaying the user body model 500 may also be updated to follow the movements.
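A highly simplified sketch of how such tracking data could drive the model updates described above is shown below; it assumes the per-frame tracking data reduces to an XYZ offset, which is an assumption made only for illustration.

# Illustrative sketch only; real tracking data would include orientation and
# posture, not just a translation offset.
def update_models(body_points: dict, object_points: dict, offset: tuple):
    """Shift both the user body model and the overlaid object model by the
    tracked movement so the superimposed model follows the user."""
    def shift(points):
        return {n: (x + offset[0], y + offset[1], z + offset[2])
                for n, (x, y, z) in points.items()}
    return shift(body_points), shift(object_points)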
[0062] In an embodiment, the user may select from the web application a number of movement types for visualizing the movements on the superimposed model 600. The movement types may be any one of walking, running, dancing, jumping, whirling or a combination thereof. Further, the movement of the superimposed model 600 can also be generated independently for every body part based on the inputs provided by the user.
[0063] In another embodiment, the system 10 of virtual try-on can be used in streets, shops, malls, companies, boutiques, etc.
[0064] In yet another embodiment, the system 10 of virtual try-on is implemented on mobile phones, big screens in shops or malls, desktop applications, as smart mirrors in boutiques, digital signages etc.
[0065] In yet another embodiment, the system 10 of virtual try-on includes subscriptions to famous designers. The user may subscribe to a designer to access the object models 400 of the famous designers, with the applicable subscription charges applied.
[0066] While the above description contains specific details regarding certain elements, embodiments, and other teachings, it is understood that embodiments of the disclosure or any combination of them may be practiced without these specific details. These details should not be construed as limitations on the scope of any embodiment, but merely as exemplifications of the presently preferred embodiments. In other instances, well known structures, elements, and techniques have not been shown to clearly explain the details of the disclosure.
CLAIMS:
1. A system (10) for providing real-time virtual try-on experience of wearing an object made of a selected fabric, the system (10) comprising:
a server (110); and
a computing device (100) comprising:
a memory (101) configured to store a set of instructions, a plurality of body models, and a plurality of fabric images;
an image capturing module (102) configured to capture at least one of an image of a user or a fabric of interest in real-time;
a processor (104) configured to execute the set of instructions to
generate, using a fabric model creation module (106), a fabric model based on at least one image of the fabric of interest;
generate, using an augmented reality (AR) engine (108), a fabric layer based on the generated fabric model, wherein the fabric layer mimics a plurality of characteristics of the fabric of interest;
obtain, from the server (110), at least one object model (400) corresponding to at least one object;
generate, using the augmented reality (AR) engine (108), a superimposed model (600) by overlaying the at least one object model (400) on a user body model (500); and
generate, using the AR engine (108), an image representing a real-time virtual view of a user wearing an object made of the selected fabric by overlaying the fabric layer on the superimposed model (600).
2. The system (10) as claimed in claim 1, wherein the processor (104) is further configured to display, on a display unit (107) of the computing device (100), the image representing the real-time virtual view of the user wearing the object made of the selected fabric.
3. The system (10) as claimed in claim 1, wherein the processor (104) is further configured to generate, using a user body model creation module (105), the user body model (500) based on at least one of a captured image of the user or a first user input.
4. The system (10) as claimed in claim 3, wherein the user body model (500) is generated by determining at least one of skeletal information of the user, or user characteristics comprising one or more index or articulation points, user dimension, user's overlying musculature and surface features, filling in the user's outer dimensions based on the at least one of the captured image of the user or the first user input, and generating the user body model (500) based on the at least one of skeletal information of the user, or the user characteristics.
5. The system (10) as claimed in claim 1, wherein the processor (104) is further configured to generate, using a user body model creation module (105), the user body model (500) by obtaining a first body model from the plurality of body models and fixing an image of a user face on the first body model of the plurality of body models.
6. The system (10) as claimed in claim 1, wherein the server (110) is configured to generate, using the object creation module (112), the at least one object model (400) based on the at least one object, wherein the object creation module (112) identifies articulation points of the object model (400) based on the at least one object.
7. The system (10) as claimed in claim 6, wherein the processor (104) is configured to overlay the at least one object model (400) on the user body model (500) by matching articulation points of the user body model (500) to corresponding articulation points of the object model (400).
8. The system (10) as claimed in claim 1, wherein the AR engine (108) is configured to compare dimensions of the fabric layer with dimensions of the superimposed model (600) and adjusting the dimensions of the fabric layer based on the dimensions of the superimposed model (600) for overlaying the fabric layer on the superimposed model (600).
9. The system (10) as claimed in claim 1, wherein the AR engine (108) is configured to facilitate color change of the fabric layer based on the user input on real time basis.
10. The system (10) as claimed in claim 1, wherein the AR engine (108) is configured to determine the at least one object model (400) that matches the user based on the user dimension.
11. The system (10) as claimed in claim 1, wherein the AR engine (108) is configured to track a position, location, orientation, posture, and dynamic movements of the user, wherein movement types comprise at least one of walking, running, dancing, jumping, whirling and a combination thereof.

12. The system (10) as claimed in claim 11, wherein the processor (104) is further configured to update the user body model (500) based on the dynamic movements of the user in a three-dimensional environment and update the object model (400) overlaying the user body model (500) based on the dynamic movements of the user.
13. The system (10) as claimed in claim 1, wherein the plurality of characteristics of the at least one fabric comprises at least one of stretch, stiffness, drape, color, thickness, size, and texture of the fabric.
14. A method for providing real-time virtual try-on experience of wearing an object made of a selected fabric, the method being performed by a computing device (100) that comprises a memory (101) configured to store a set of instructions, a plurality of body models, and a plurality of fabric images, and a processor (104) configured to execute the set of instructions, wherein the method comprises:
generating, using a fabric model creation module (106), a fabric model based on at least one image of a fabric of interest;
generating, using a fabric model creation module (106), a fabric layer based on the generated fabric model, wherein the fabric layer mimics a plurality of characteristics of the fabric of interest;
obtaining, from the server (110), at least one object model (400) corresponding to at least one object;
generating, using an augmented reality (AR) engine (108), a superimposed model (600) by overlaying the at least one object model (400) on a user body model (500);
generating (S207), using the AR engine (108), an image representing a real-time virtual view of a user wearing an object made of the selected fabric by overlaying the fabric layer on the superimposed model (600); and
displaying (S208), using a display unit (107), the image representing the real-time virtual view of the user wearing the object made of the selected fabric.
15. The method as claimed in claim 14, wherein the method comprises generating, using user body model creation module (105), the user body model (500) based on at least one of a captured image of the user or a first user input.
16. The method as claimed in claim 15, wherein the user body model (500) is generated by obtaining a first body model from the plurality of body models and fixing an image of a user face on the first body model of the plurality of body models.
17. The method as claimed in claim 15, wherein the user body model (500) is generated by determining at least one of skeletal information of the user, or user characteristics comprising one or more index or articulation points, user dimension, user's overlying musculature and surface features, filling in the user's outer dimensions based on the at least one of the captured image of the user or the first user input, and generating the user body model (500) based on the at least one of skeletal information of the user, or the user characteristics.
18. The method as claimed in claim 14, wherein the method comprises overlaying the at least one object model (400) on the user body model (500) by matching articulation points of the user body model (500) to corresponding articulation points of the object model (400).

19. The method as claimed in claim 14, wherein the method comprises comparing dimensions of the fabric layer with dimensions of the superimposed model (600) and adjusting the dimensions of the fabric layer based on the dimensions of the superimposed model (600) for overlaying the fabric layer on the superimposed model (600).

20. The method as claimed in claim 14, wherein the method comprises modifying a color of the fabric layer based on the user input on real time basis.

Documents

Application Documents

# Name Date
1 202211057310-STATEMENT OF UNDERTAKING (FORM 3) [06-10-2022(online)].pdf 2022-10-06
2 202211057310-PROVISIONAL SPECIFICATION [06-10-2022(online)].pdf 2022-10-06
3 202211057310-FORM 1 [06-10-2022(online)].pdf 2022-10-06
4 202211057310-DRAWINGS [06-10-2022(online)].pdf 2022-10-06
5 202211057310-DRAWING [06-10-2023(online)].pdf 2023-10-06
6 202211057310-CORRESPONDENCE-OTHERS [06-10-2023(online)].pdf 2023-10-06
7 202211057310-COMPLETE SPECIFICATION [06-10-2023(online)].pdf 2023-10-06