ABSTRACT
METHOD OF GENERATING A THREE DIMENSIONAL (3D) MODEL OF AN OBJECT
The present invention describes a method of generating a three dimensional (3D) model of a real world object. In one embodiment, the present invention creates an augmented reality (AR) viewport for 3D modeling of a real world object which is captured live using an AR camera. The AR viewport is aligned with the real world object captured live using the AR camera as reference on a 3D modeling tool interface. A user is then allowed to create a 3D mesh model of the real world object using the AR viewport, where the real world object is available as reference along with the 3D modeling interface. The present invention further provides real time feedback to the user for each input during modelling so that the inputs are autocorrected based on the real world reference object.
Figure 4
FORM 2
THE PATENTS ACT, 1970
[39 of 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(Section 10; Rule 13)
METHOD OF GENERATING A THREE DIMENSIONAL (3D) MODEL OF AN OBJECT
SAMSUNG R&D INSTITUTE INDIA – BANGALORE Pvt. Ltd.
# 2870, ORION Building, Bagmane Constellation Business Park,
Outer Ring Road, Doddanekundi Circle,
Marathahalli Post,
Bangalore -560037, Karnataka, India
Indian Company
The following Specification particularly describes the invention and the manner in which it is to be performed
RELATED APPLICATION
The present invention claims the benefit of the Indian Provisional Application No. 2643/CHE/2015 titled “AUGMENTED 3D TRACING TECHNIQUE” by Samsung R&D Institute India – Bangalore Private Limited, filed on 26th May 2015, which is herein incorporated in its entirety by reference for all purposes.
FIELD OF THE INVENTION
The present invention generally relates to graphics and animation, and more particularly relates to a method of generating a three dimensional (3D) model of an object using augmented reality (AR).
BACKGROUND OF THE INVENTION
Computer graphics systems can acquire or synthesize images of real or imaginary objects and scenes, and then reproduce these in a virtual world. Computer systems have also attempted the reverse, which is to insert computer graphics images into the real world. This is mainly performed for adding special effects in movies, and for real-time augmented reality (AR).
Currently, there is a trend to use light projectors to render imagery directly in real physical environments. In augmented reality, technology is used to layer virtual content over the existing physical world. A few applications of augmented reality are as follows. An AR device could price match shopping items via the web as the user looks at them, visually identify landmarks and intersections to provide directions or historical information, enhance natural vision, or even provide vital patient information to a surgeon during a medical procedure. Applications as simple as helping a user visualize furniture placement in a room and as complicated as providing a soldier with a 360-degree tactical view of a battlefield have already been developed.
In most of the above scenarios, 3D content modelling is an important aspect and also a challenging task which requires a lot of creativity and perfection. There are various tools providing 3D modelling functionality where the techniques applied are either fully manual or fully automatic. Manual tools are more flexible, but when working with them, the real reference object and the created 3D model do not have a common reference point against which perfection can be achieved. Working with such tools also involves a steep learning curve and considerable creativity. Thus, there is a need for a method that improves 3D mesh model creation, makes it easier for ordinary users and achieves perfection in 3D content creation.
SUMMARY OF THE INVENTION
The various embodiments herein disclose a method of generating a three dimensional (3D) model of an object, the method comprising: creating an augmented reality (AR) viewport for 3D modeling of a real world object captured live using an AR camera, aligning the AR viewport with the real world object, and allowing a user to create a 3D mesh model of the real world object using the AR viewport where the real world object is available as reference along with the 3D modeling interface.
According to an embodiment of the present invention, creating the 3D mesh model of the real world object comprises determining one or more reference points on the object in the AR view, wherein each reference point corresponds to at least one of an edge and a boundary of the object.
According to one embodiment of the present invention, the method further comprises allowing the user to trace over the one or more reference points on the object in the AR view, and automatically aligning/extending at least an edge of the 3D mesh model of the object to at least one reference point on the object in response to the user snapping/sweeping towards the edge of the object in the AR view.
According to one embodiment, the present invention further comprises providing real time feedback to the user for each input during modelling so that the inputs are autocorrected based on the real world reference object.
According to one embodiment of the present invention, aligning the AR viewport with the real world object captured live using the AR camera as reference on a 3D modeling tool interface comprises retrieving 3D perspective information of the real world object from the 2D image captured continuously from the AR camera, and displaying the constructed 3D mesh model of the real world object in sync with the real world object based on the retrieved 3D perspective information.
According to one embodiment of the present invention, automatically aligning at least an edge of the 3D mesh model of the real world object comprises comparing the trace of the real world object made by the user in the AR viewport with the real world object as reference, detecting deviations in at least an edge of the traced real world object, and automatically aligning/extending at least an edge of the traced real world object based on the deviations.
According to one embodiment of the present invention, the method further comprises allowing the user to re-size the created 3D mesh model manually to match the real world object.
According to one embodiment of the present invention, the method further comprises generating a 3D mesh model of an asymmetric object by tracing the asymmetric object from multiple angles in the AR viewport to capture all details associated with the asymmetric object.
According to one embodiment of the present invention, the method further comprises generating texture information by mapping the constructed 3D mesh model with the real world object to extract the relevant 2D content from different perspectives, and combining the relevant 2D content extracted from different perspectives to form a single texture.
Embodiments herein further disclose a system for generating a three dimensional (3D) model of an object, comprising an AR camera equipped in any kind of device for capturing a real world object, a perspective detection unit adapted for retrieving 3D information associated with the real world object, a user input module for creating a 3D mesh model of the real world object, an image processing module for processing the captured real world object, and an augmented reality (AR) module configured for creating an augmented reality (AR) viewport for 3D modeling of a real world object captured live using the AR camera, aligning the viewport with the real world object, and allowing a user to create a 3D mesh model of the real world object using the AR viewport where the real world object is available as reference along with the 3D modeling interface.
The foregoing has outlined, in general, the various aspects of the invention and is to serve as an aid to better understanding the more complete detailed description which is to follow. In reference to such, there is to be a clear understanding that the present invention is not limited to the method or application of use described and illustrated herein. It is intended that any other advantages and objects of the present invention that become apparent or obvious from the detailed description or illustrations contained herein are within the scope of the present invention.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
Figure 1A and 1B are schematic diagrams illustrating different ways of generating a three dimensional (3D) model of a real world object using an augmented reality (AR) viewport, according to one embodiment.
Figure 2 is a schematic diagram illustrating an exemplary way of creating a three dimensional model (3D) of a real world object with a hand held device using an AR viewport having a real world object as reference, according to one embodiment.
Figure 3 is a schematic diagram representing 3D mesh modifications with augmented reality viewport, according to an embodiment of the present invention.
Figure 4 is a system diagram illustrating one or more components for generating a three dimensional model of a real world object, according to one embodiment.
Although specific features of the present invention are shown in some drawings and not in others, this is done for convenience only as each feature may be combined with any or all of the other features in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The embodiments of the present invention will now be described in detail with reference to the accompanying drawings. However, the present invention is not limited to the embodiments. The present invention can be modified in various forms. Thus, the embodiments of the present invention are only provided to explain the present invention more clearly to those ordinarily skilled in the art of the present invention. In the accompanying drawings, like reference numerals are used to indicate like components.
The specification may refer to “an”, “one” or “some” embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention applies to any kind of device, such as a head mounted display, a tablet or a handheld device, having an augmented reality (AR) camera. Figures 1A and 1B are schematic diagrams illustrating different ways of generating a three dimensional (3D) model of a real world object using an augmented reality (AR) viewport, according to one embodiment.
A schematic representation of creating a 3D model of a real world object using a head mounted display is shown in figure 1A, and using a tablet in figure 1B. In one embodiment, consider that a user is viewing a real world object using the head mounted display. In this embodiment, interaction with the real world object is very futuristic as the user uses hand gestures to draw the boundaries of the real world object.
In another embodiment, the user uses a rear view camera of a tablet to view/capture the real world object. In this embodiment, the user draws a boundary of the real world object on the camera interface. In both embodiments, the boundary values provide 2D information associated with the real world object. 3D information about this input is then retrieved using perspective information of the real world object captured in the augmented reality (AR) viewport. It is to be noted that the created model in the AR viewport is in sync with the real world object, which enables the user to create and edit the 3D mesh model more efficiently.
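One possible way to realize this 2D-to-3D retrieval, assuming a pinhole camera with known intrinsics K and a pose (R, t) supplied by the perspective detection step, and further assuming the object rests on a known support plane (z = 0 in the world frame), is sketched below; the function name and its inputs are illustrative assumptions rather than the claimed implementation.

```python
# A minimal sketch (not the claimed implementation) of lifting a traced 2D
# boundary to 3D by intersecting camera rays with an assumed support plane
# (z = 0 in the world frame).  K, R and t are assumed to come from the
# perspective detection step.
import numpy as np

def lift_boundary_to_plane(pixels, K, R, t):
    """pixels: (N, 2) traced image points; returns (N, 3) world points on z = 0."""
    K_inv = np.linalg.inv(K)
    cam_center = -R.T @ t                        # camera centre in world coordinates
    points_3d = []
    for (u, v) in pixels:
        ray_cam = K_inv @ np.array([u, v, 1.0])  # viewing ray in the camera frame
        ray_world = R.T @ ray_cam                # rotate the ray into the world frame
        s = -cam_center[2] / ray_world[2]        # intersect the ray with the plane z = 0
        points_3d.append(cam_center + s * ray_world)
    return np.array(points_3d)
```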
Figure 2 is a schematic diagram illustrating an exemplary way of creating a three dimensional model (3D) of a real world object with a head mounted display using an AR viewport having the real world object as reference, according to one embodiment.
As shown in figure 2, the user uses hand gestures to draw the boundary of the real world object being viewed using the head mounted display. In turn, the co-ordinates or boundary values of the real world object are intelligently retrieved by the three dimensional (3D) modeling tool, and a 3D spline curve of the real world object is created in the AR viewport using the retrieved co-ordinates. The created 3D spline curve of the real world object is in sync with the real world object being viewed using the head mounted display. The 3D spline can then be used to construct a basic concrete shape using any basic modeling technique, such as an extrude operation. Further modeling techniques can also be applied to attain perfection.
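A minimal sketch of such a basic extrude operation is given below, under the assumption that the traced boundary has already been lifted to a closed 3D profile; the helper and its parameters are illustrative, not taken from the specification.

```python
# A minimal sketch of extruding a closed 3D profile along a direction to form
# a simple mesh, returned as vertices and triangular faces.
import numpy as np

def extrude_profile(profile, direction, steps=1):
    """profile: (N, 3) closed loop of 3D points; direction: (3,) extrusion vector."""
    profile = np.asarray(profile, dtype=float)
    direction = np.asarray(direction, dtype=float)
    n = len(profile)
    # Stack copies of the profile translated along the extrusion direction.
    rings = [profile + (k / steps) * direction for k in range(steps + 1)]
    vertices = np.vstack(rings)
    faces = []
    for k in range(steps):
        for i in range(n):
            a = k * n + i
            b = k * n + (i + 1) % n
            c = (k + 1) * n + i
            d = (k + 1) * n + (i + 1) % n
            faces.append((a, b, d))              # split each side quad into two triangles
            faces.append((a, d, c))
    return vertices, np.array(faces)
```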
Figure 3 is a schematic diagram representing 3D mesh modifications with the augmented reality (AR) viewport, according to an embodiment of the present invention. Once the AR viewport of the real world object is created, the user is allowed to trace over the AR viewport of the real world object to generate the 3D mesh model. The step by step process of generating the 3D mesh model of the real world object is explained as follows. Consider that the user wants to create a 3D model of a water bottle as shown in figure 3. At first, the user uses hand gestures to draw the boundary of the water bottle. The co-ordinates associated with the boundary of the water bottle are intelligently retrieved by the 3D modelling tool interface, and using the perspective 3D information of the water bottle a virtual 3D model of the water bottle is created. The 3D model of the water bottle is created in such a way that the real world object and the created model synchronize with each other. Thus the AR viewport along with the modelling interface allows the user to work with the created 3D model with the actual real world object acting as reference.
Further, the AR viewport provides real time feedback to the user for each input during modeling of the water bottle, so that the inputs are autocorrected based on the real world reference object. For example, if the user tries to draw a curve for generating the 3D model, the AR viewport automatically snaps the curve to the boundary of the water bottle. Also, if the user sweeps/snaps towards an edge of the water bottle in the AR view, the tool automatically aligns/extends at least an edge of the 3D mesh model of the water bottle to at least one reference point on the water bottle.
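For illustration only, this snapping behaviour could be realized by pulling each point of the user's stroke to the nearest detected boundary pixel when it falls within a snap radius; the function name and the threshold value below are assumptions, not part of the disclosure.

```python
# A minimal sketch of snapping a user-drawn stroke to detected boundary pixels.
import numpy as np

def snap_to_boundary(stroke, boundary, snap_radius=8.0):
    """stroke: (N, 2) user-drawn points; boundary: (M, 2) detected edge pixels."""
    snapped = np.asarray(stroke, dtype=float).copy()
    boundary = np.asarray(boundary, dtype=float)
    for i, p in enumerate(snapped):
        dists = np.linalg.norm(boundary - p, axis=1)
        j = np.argmin(dists)
        if dists[j] <= snap_radius:              # autocorrect only points near the real edge
            snapped[i] = boundary[j]
    return snapped
```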
Figure 4 is a system diagram illustrating one or more components for generating a three dimensional model of a real world object, according to one embodiment. As shown in Figure 4, the system consists of a camera 404 embedded in a handheld device to capture a live real world object 402, a perspective detection unit 406, an image processing unit 408, an edge detection unit 410, a material extraction unit 412, a real world comparison unit 414, an AR viewport module 416 and a user input module 418.
In one aspect, the present invention allows a user to create a 3D model of a real world object. As shown in figure 4, the real world object 402 is captured live using the camera 404. However, the camera 404 outputs 2D information associated with the real world object 402. The 2D information associated with the real world object 402 is then sent to the image processing unit 408 and the perspective detection unit 406. The image processing unit 408 further comprises an edge detection unit 410, a material extraction unit 412 and a real world comparison unit 414. The edge detection unit 410 detects one or more reference points associated with the real world object 402, wherein the one or more reference points correspond to boundary values of the real world object 402. The material extraction unit 412 is used for generating texture information of the real world object 402 by mapping the constructed 3D model with the real world object to extract the relevant 2D content from different perspectives and then combining the relevant 2D content extracted from different perspectives to form a single texture. The real world comparison unit 414 compares the real world object 402 and the created 3D augmented content and aids the user in autocorrecting errors.
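As one possible illustration of the role played by the edge detection unit 410, boundary pixels of the live frame could be extracted with a standard edge detector such as OpenCV's Canny operator; the thresholds and the assumption that the object forms the largest contour in view are illustrative, not specified by the invention.

```python
# A minimal sketch of extracting reference boundary points from a camera frame.
import cv2
import numpy as np

def detect_reference_points(frame_bgr):
    """Return an (M, 2) array of (x, y) boundary pixels for the largest contour."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # illustrative thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2))
    largest = max(contours, key=cv2.contourArea)  # assume the object dominates the view
    return largest.reshape(-1, 2)
```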
In one embodiment, the perspective detection unit 406 uses existing marker based and non-marker based solutions to define the perspective for the AR viewport. The perspective detection unit 406 provides 3D information associated with the real world object 402 based on the obtained 2D information. The AR viewport module 416 uses the same perspective 3D information to render the augmented 3D content so that the real world object 402 and the created model of the real world object align with each other. A compositor in the AR viewport module 416 performs the job of combining the 2D content from the camera with the 2D rendering of the created 3D model based on the 3D perspective retrieved from the real world object.
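Purely as an illustration of such a compositing step, a marker based pose estimate (for example via cv2.solvePnP) could be used to project the created mesh over the live frame as a wireframe overlay; every input below is an assumption supplied by the caller, not a detail disclosed by the specification.

```python
# A minimal sketch of pose estimation plus compositing of the 3D model over
# the live camera frame.
import cv2
import numpy as np

def composite_model(frame, marker_3d, marker_2d, mesh_vertices, mesh_edges, K):
    dist = np.zeros(5)                            # assume an undistorted camera
    ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist)
    if not ok:
        return frame
    pts_2d, _ = cv2.projectPoints(mesh_vertices, rvec, tvec, K, dist)
    pts_2d = pts_2d.reshape(-1, 2)
    out = frame.copy()
    for i, j in mesh_edges:                       # overlay the mesh as a wireframe
        p1 = tuple(int(c) for c in pts_2d[i])
        p2 = tuple(int(c) for c in pts_2d[j])
        cv2.line(out, p1, p2, (0, 255, 0), 2)
    return out
```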
In one exemplary operation, the user uses the camera view of the real world object 402 as reference while creating the 3D model of the real world object 402. Using the real world object as reference, the user traces over the object as seen in the AR viewport on the 3D modelling tool interface. The user can provide input via the user input module 418. When the user draws a 3D spline curve, the tool automatically retrieves information about the real world object 402 from the image processing unit 408 and provides real time feedback to the user. The real time feedback may comprise one of: performing auto correction of the curve when the user deviates from the augmented 3D content rendered on the AR viewport, automatically snapping the curve towards at least one boundary of the real world object, and so on. Further, the tool automatically aligns/extends at least an edge of the 3D mesh model of the real world object in response to the user snapping/sweeping towards the edge of the object in the AR view. In another embodiment, the tool allows the user to re-size the created 3D mesh model manually to match the real world object. Using the present invention, the user can also generate a 3D mesh model of an asymmetric object by tracing the asymmetric object from multiple angles (360 degree view) in the AR viewport to capture all details associated with the asymmetric object. Further, the user can take an already existing mesh model template and customize the template using the AR view of a real world object.
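One simple illustration of the manual re-size step is a uniform scaling of the created mesh about its centroid to an extent the user indicates on the real object; both the helper and its target-extent input are assumptions made for the sketch.

```python
# A minimal sketch of uniformly re-sizing the created mesh to a target extent.
import numpy as np

def resize_mesh(vertices, target_extent):
    """Scale mesh vertices about their centroid so the bounding-box diagonal
    matches target_extent."""
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    current_extent = np.linalg.norm(v.max(axis=0) - v.min(axis=0))
    scale = target_extent / current_extent
    return centroid + (v - centroid) * scale
```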
Although the invention of the method and system has been described in connection with the embodiments of the present invention illustrated in the accompanying drawings, it is not limited thereto. It will be apparent to those skilled in the art that various substitutions, modifications and changes may be made thereto without departing from the scope and spirit of the invention.
CLAIMS
We Claim:
1. A method of generating a three dimensional (3D) model of an object, the method comprising:
creating an augmented reality (AR) viewport for 3D modeling of a real world object captured live using an AR camera;
aligning the AR viewport with the real world object captured live using the AR camera as reference on a 3D modeling tool interface; and
allowing a user to create a 3D mesh model of the real world object using the AR viewport where the real world object is available as reference along with the 3D modeling interface.
2. The method as claimed in claim 1, wherein creating the 3D mesh model of the real world object comprises:
determining one or more reference points on the object in the AR view, wherein the reference point corresponds to at least one of an edge and boundary of the object;
allowing the user to trace over the one or more reference points on the object in AR view; and
automatically aligning at least an edge of the 3D mesh model to at least one reference point on the object in response to the user sweeping towards the at least one reference point in AR viewport.
3. The method as claimed in claim 1, further comprising:
providing real time feedback for the user for each of his inputs during modelling so that the inputs are autocorrected based on the real world reference object.
4. The method as claimed in claim 1, wherein aligning the AR viewport with the real world object captured live using the AR camera as reference on a 3D modeling tool interface comprises:
retrieving 3D perspective information of the real world object from the 2D image captured continuously from the AR camera; and
displaying the constructed 3D mesh model of the real world object in sync with the real world object based on the retrieved 3D information.
5. The method as claimed in claim 2, wherein automatically aligning/extending at least an edge of the 3D mesh model of the real world object comprises:
comparing the trace of the real world object made by the user in AR viewport with the real world object as reference;
detecting deviations in at least an edge of the traced real world object; and
automatically aligning at least an edge of the traced real world object based on deviations.
6. The method as claimed in claim 1, further comprising:
allowing the user to re-size the created 3D mesh model manually to match with the real world object.
7. The method as claimed in claim 1, further comprising:
generating 3D mesh model of an asymmetric object by tracing the asymmetric object from multiple angles in the AR viewport to capture all details associated with the asymmetric object.
8. The method as claimed in claim 1, further comprising:
generating texture information by mapping the constructed 3D model with the real world object to extract the relevant 2D content from different perspectives; and
combining the relevant 2D content extracted from different perspectives to form a single texture.
9. A system for generating a three dimensional (3D) model of an object, comprising:
an AR camera equipped in any kind of device for capturing a real world object;
a perspective detection unit adapted for retrieving 3D information associated with the real world object;
a user input module for creating a 3D mesh model of the real world object;
an image processing module for processing the captured real world object; and
an augmented reality (AR) module configured for:
creating an augmented reality (AR) viewport for 3D modeling of a real world object captured live using an AR camera;
aligning the AR viewport with the real world object captured live using the AR camera as reference on a 3D modeling tool interface; and
allowing a user to create a 3D mesh model of the real world object using the AR viewport where the real world object is available as reference along with the 3D modeling interface.
10. The system as claimed in claim 9, wherein in creating the 3D mesh model of the real world object, the AR viewport module is adapted for:
determining one or more reference points on the object in the AR view, wherein the reference point corresponds to at least one of an edge and boundary of the object;
allowing the user to trace over the one or more reference points on the object in AR view; and
automatically aligning at least an edge of the 3D mesh model of the object to at least one reference point on the object in response to user snapping towards the edge of the object in AR view.
11. The system as claimed in claim 9, wherein, for aligning the AR viewport with the real world object captured live using the AR camera as reference on a 3D modeling tool interface, the system comprises:
a perspective detection unit for retrieving 3D perspective information of the real world object from the 2D image captured continuously from the AR camera; and
a compositor for displaying the constructed 3D mesh model of the real world object in sync with the real world object based on the retrieved 3D information.
12. The system as claimed in claim 10, wherein in automatically aligning/extending at least an edge of the 3D mesh model of the real world object, the AR viewport module is adapted for:
comparing the trace of the real world object made by the user in AR viewport with the real world object as reference;
detecting deviations in at least an edge of the traced real world object; and
automatically aligning/extending at least an edge of the traced real world object based on deviations.
Dated this the 1st day of October 2015
Signature
KEERTHI JS
Patent agent
Agent for the applicant