
System And Method For Providing Automatic Digitization Of An Object

Abstract: Embodiments of the present disclosure facilitate providing an automatic digitization of an object. The systems and methods, illustrated in Figure 1, facilitate sequentially projecting each of a set of predefined images on the object using an image projector. Images of the illuminated object are captured using a pair of cameras. The captured images of the object, the set of predefined images for illumination, and calibration parameters associated with the pair of cameras are received at a computing device. The computing device automatically determines a correspondence between subsets of pixels in the two cameras independently of their calibration parameters, and then automatically determines a digitization of the portion of the object that falls within the field of view of the image projector and the cameras. Furthermore, the digitized portion of the object is reconstructed as a three-dimensional surface that is parameterized over a planar region.


Patent Information

Application #
202041027369
Filing Date
27 June 2020
Publication Number
53/2021
Publication Type
INA
Invention Field
PHYSICS
Status
Email
info@khuranaandkhurana.com
Parent Application
Patent Number
Legal Status
Grant Date
2023-12-07
Renewal Date

Applicants

Indian Institute of Science
C V Raman Road, Bangalore -560012, Karnataka, India.

Inventors

1. RANGARAJAN, Ramsharan
Indian Institute of Science, C V Raman Road, Bangalore - 560012, Karnataka, India.

Specification

Claims:
1. A system to facilitate providing an automatic digitization of an object, said system comprising:
a pair of cameras;
an image projector;
a set of predefined images for illumination, where upon the object being placed within a field of view of the image projector and the pair of cameras, each of the set of predefined images is sequentially projected on the object using the image projector, and images of the object thus illuminated are captured with the pair of cameras; and
a computing device associated with a processor, where a set of data packets received at the processor is used to automatically determine a digitization of a portion of the object falling within the field of view of the pair of cameras and the image projector, and a correspondence between a subset of pixels in a first camera and a subset of pixels in a second camera of the pair of cameras, independently of a set of calibration parameters of the pair of cameras.
2. The system as claimed in claim 1, wherein the set of data packets pertain to the captured images of the object, the set of predefined images for illumination, and a set of calibration parameters associated with the pair of cameras.
3. The system as claimed in claim 1, wherein the set of predefined images for illumination are embedded with hierarchically refined binary-encoded patterns oriented along coordinate lines of a general curvilinear planar coordinate system.
4. The system as claimed in claim 1, wherein upon receiving the set of data packets, the computing device automatically determines a reconstruction of the portion of the object that falls within the field of view of the image projector and the pair of cameras, where the portion of the object is reconstructed as a three-dimensional surface that is parameterized over a planar region.
5. The system as claimed in claim 4, wherein the three-dimensional surface that is parameterized over a planar region is alternately represented either as a three-dimensional quadrangulation that is parameterized over the planar region, or as a three-dimensional triangulation parameterized over the planar region.
6. The system as claimed in claim 1, wherein the object has any of a three-dimensional surface or a two-dimensional surface.
7. The system as claimed in claim 1, wherein a projection of the set of predefined images embedded with a greater hierarchical depth of binary-encoded patterns facilitates reconstruction of the object with a higher resolution of details.
8. The system as claimed in claim 1, wherein accuracy of a plurality of reconstructed points sampling the reconstructed portion of the object is maintained irrespective of the hierarchical-depth of the binary-encoded patterns embedded in a set of projected images.
9. The system as claimed in claim 1, wherein when the object is reconstructed from a plurality of orientations, a representation for exposed portions of the object is produced as an atlas of charts.
10. A method to facilitate providing an automatic digitization of an object, said method comprising:
sequentially projecting each of a set of predefined images on the object using an image projector, and capturing images of the object thus illuminated with a pair of cameras; and
receiving, at a processor associated with a computing device, a set of data packets to automatically determine a digitization of a portion of the object falling within the field of view of the pair of cameras and the image projector, and a correspondence between a subset of pixels in a first camera and a subset of pixels in a second camera of the pair of cameras, independently of a set of calibration parameters of the pair of cameras.

Description:

TECHNICAL FIELD
[0001] The present disclosure relates generally to systems and methods for automatic digitization and surface reconstruction for objects using computer vision techniques.
BACKGROUND
[0002] Computer vision techniques aimed at constructing three-dimensional virtual models by imaging physical objects are routinely used in video games, movie special effects,
preservation/restoration/digitization of artifacts, customized medical prosthetics, ecommerce, quality inspection in manufacturing processes and robotics applications.
Widespread adoption of computer vision techniques, especially by enthusiasts and
researchers outside the science, technology, engineering and mathematics (STEM)
disciplines, relies on the availability of inexpensive hardware solutions and robust software algorithms.
[0003] A typical pipeline for creating a virtual model using structured light imaging
consists in triangulating corresponding image pixels from distinct camera views to compute a point cloud sampling of the object surface, registering clouds measured from multiple views,
removing noise/outliers and meshing the edited point cloud, which can then be forwarded to a 3D printer for instance.
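The triangulation step in the pipeline above can be sketched with the standard linear (DLT) triangulation method. This is a generic textbook illustration, not the specific method claimed in this disclosure; the projection matrices P1 and P2 and the toy scene are assumptions made for the example.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a pixel
    correspondence (x1, x2) observed by two calibrated cameras with
    3x4 projection matrices P1 and P2."""
    # Each view contributes two linear constraints on the homogeneous
    # 3D point X, of the form x * (P @ X)[2] - (P @ X)[0] = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    """Project a 3D point with projection matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy check: two axis-aligned cameras, the second translated one unit
# along x, observing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
X_rec = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the recovered point matches the true point to machine precision; with real images, the least-squares nature of the SVD solution absorbs small pixel errors.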
[0004] Active illumination techniques using structured light patterns project artificial textures onto the scene to establish correspondences between camera images captured from multiple views. A guiding principle behind the different variants is to exploit the illumination
pattern to assign unique identities to points in the scene. An ingenious example, notable for its simple scanning apparatus, consists only in sweeping a line shadow to index regions of a
[0005] Patterns using intensity ratios from gray scale illumination generally require a datum image under constant lighting and therefore become multi shot techniques. A unified
interpretation of gray codes and intensity ratio techniques in the context of optimizing
projected patterns is available. The benefit of simple decodification in such time multiplexing
techniques comes at the expense of requiring the scene to be stationary. The possibility of
imaging moving objects expectedly requires motions in the scene to be sufficiently slow, in
order to avoid temporal correlation errors.
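Time-multiplexed binary codes of the kind discussed above can be sketched as follows. The disclosure describes hierarchically subdivided black-and-white stripes without fixing an encoding; Gray-coded stripes are a common choice in this family and are used here purely as an assumed example.

```python
import numpy as np

def stripe_patterns(width, depth):
    """Generate a hierarchy of `depth` binary stripe patterns over an
    image of `width` columns, one bit plane per pattern, using a Gray
    code so that adjacent columns differ in exactly one pattern.
    Returns a (depth, width) array with values 0 (black) or 255 (white),
    ordered from coarsest to finest stripes."""
    cols = np.arange(width)
    # Gray code of each column index: n XOR (n >> 1).
    gray = cols ^ (cols >> 1)
    patterns = np.empty((depth, width), dtype=np.uint8)
    for k in range(depth):
        bit = (gray >> (depth - 1 - k)) & 1  # coarsest plane first
        patterns[k] = 255 * bit
    return patterns

# Eight columns encoded with three successively refined stripe patterns;
# each column receives a unique 3-bit code word.
pats = stripe_patterns(width=8, depth=3)
```

Projecting the rows of `pats` one after another assigns every projector column a unique temporal code, which is why such techniques require a stationary scene.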
[0006] Spatial multiplexing methods requiring a single illumination pattern to be projected, on the other hand, are well suited for imaging dynamic scenes. This feature however comes with the burden of competing requirements on the illumination pattern— that it distinctly encode multiple spatial locations and also be easily decodable. Patterns typically
consist of a small collection of distinguishable colors and/or an alphabet of geometrical shapes arranged in a special sequence. The arrangement of elements is chosen in such a way
that inspecting a window around each element furnishes a unique code word, thereby
indexing its spatial location. The idea of using color-coded slits represents an early attempt in
this direction.
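The window-uniqueness property described above is commonly obtained from a De Bruijn sequence, in which every fixed-length window over a small alphabet appears exactly once. The sketch below uses the standard Lyndon-word construction; the choice of three colors and windows of length three is an assumption for illustration, not a parameter taken from this disclosure.

```python
def de_bruijn(k, n):
    """Cyclic De Bruijn sequence B(k, n): every length-n window over a
    k-letter alphabet appears exactly once (standard Lyndon-word
    construction). Useful for assigning window-unique code words to
    colored slits in a single projected pattern."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Three colors, windows of length 3: 27 slits, each identified by the
# colors of itself and its two right-hand neighbors.
slits = de_bruijn(3, 3)
```

Because the sequence is cyclic, inspecting any window of three consecutive slits yields a code word that pins down the window's position in the pattern.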
[0007] Further, common challenges in single shot imaging include occlusions, cross talk between color channels, disconnected grid lines, dislocated fringes, ghost boundaries caused by texture/geometry, and distortion of projected patterns. All these issues render the correspondence problem more difficult to resolve. To this end, dynamic programming
methods, deep learning techniques and adaptive color discrimination algorithms can help
improve the fault tolerance of decoding strategies.
[0008] Scanning a surface represents only the first step in the pipeline for producing a digital model. Of direct concern is the fact that scanning techniques invariably only produce a point cloud representation of an object’s surface. The inherent limitations of such
representations are well recognized. For example, a small amount of noise can result in
assigning an incorrect topology to the surface. Handling data sets with non-uniform point densities, representing objects with sharp edges/creases or even segmenting point clouds into
disconnected sets continue to remain challenging problems. Nevertheless, it is essential to
transform scan data into surface models for use in computer aided design, for reverse engineering workflows and in additive manufacturing processes.
[0009] To tackle the above-mentioned and similar issues, various algorithms have been proposed to either directly handle points as geometric primitives or to transform point clouds into surface representations. Algorithms of the latter type generally construct triangulated surfaces, which, in turn, serve as starting points for constructing parametric surface patches. Alternately, a scanned point cloud can be converted to an implicit representation as the zero level set of a function, and subsequently triangulated using a marching cubes or a Delaunay triangulation algorithm. An inherent issue in defining a surface starting from a point cloud arises from the fact that the problem is ill-posed: the solution is necessarily non-unique. For a given point sampling, picking out one among the set of all candidate surfaces fitting the data requires specifying additional constraints, often
conveyed through user-defined parameters. Consequently, integrity of computed surfaces can only be verified a posteriori. For this reason, remeshing operations to repair triangulated
surfaces become inevitable.
[0010] There is therefore a need in the art to provide a robust and automatic mechanism to overcome the limitations mentioned above.

OBJECTS OF THE INVENTION

[0011] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as listed herein below.
[0012] It is an object of the present disclosure to facilitate providing design and mechanical analysis of physical prototypes.
[0013] It is an object of the present disclosure to facilitate providing a digitization of prototypes and manipulating the prototypes directly in CAD software for design modifications or for reverse engineering.
[0014] It is an object of the present disclosure to facilitate creating textures, either from or for a reconstructed surface, for applications in computer graphics, computer animation, virtual reality and product design.
[0015] It is an object of the present disclosure to facilitate providing non-invasive quality control in manufacturing processes to inspect geometric dimensions and deviations from tolerances.
[0016] It is an object of the present disclosure to facilitate providing a digitization of historical artifacts for preservation and to facilitate the creation and curation of digital museums.
[0017] It is an object of the present disclosure to facilitate providing personalized medical implants, prosthetic devices and form-fitting appendages.
[0018] It is an object of the present disclosure to facilitate virtual try-on for apparel and related fashion products, eyewear, footwear and facial masks.

SUMMARY

[0019] The present disclosure relates generally to systems and methods for automatic digitization and surface reconstruction for objects using computer vision techniques.
[0020] An aspect of the present disclosure pertains to a system to facilitate providing
an automatic digitization of an object, said system comprising: a pair of cameras; an image projector; a set of predefined images for illumination, where upon the object being placed
within a field of view of the image projector and the pair of cameras, sequentially projecting
each of the set of predefined images on the object using the image projector, and capturing
images of the object thus illuminated with the pair of cameras; and a computing device associated with a processor, where a set of data packets being received at the processor are used to automatically determine a digitization of a portion of the object falling within the field of view of the pair of cameras and the image projector, and a correspondence between a subset of pixels in a first camera with a subset of pixels in a second camera of the pair of
cameras is determined independently of a set of calibration parameters of the pair of cameras.
[0021] According to an embodiment, the set of data packets pertain to the captured images of the object, the set of predefined images for illumination, and a set of calibration parameters associated with the pair of cameras.
[0022] According to an embodiment, the set of predefined images for illumination are embedded with hierarchically refined binary-encoded patterns oriented along coordinate lines of a general curvilinear planar coordinate system.
[0023] According to an embodiment, upon receiving the set of data packets, the computing device automatically determines a reconstruction of the portion of the object that falls within the field of view of the image projector and the pair of cameras, where the portion of the object is reconstructed as a three-dimensional surface that is parameterized over a planar region.
[0024] According to an embodiment, the three-dimensional surface that is parameterized over a planar region is alternately represented either as a three-dimensional quadrangulation that is parameterized over the planar region, or as a three-dimensional triangulation parameterized over the planar region.
[0025] According to an embodiment, the object has any of a three-dimensional surface or a two-dimensional surface.
[0026] According to an embodiment, a projection of the set of predefined images embedded with a greater hierarchical depth of binary-encoded patterns facilitates reconstruction of the object with a higher resolution of details.
[0027] According to an embodiment, accuracy of a plurality of reconstructed points sampling the reconstructed portion of the object is maintained irrespective of the hierarchical depth of the binary-encoded patterns embedded in a set of projected images.
[0028] According to an embodiment, when the object is reconstructed from a plurality of orientations, a representation for exposed portions of the object is produced as an atlas of charts.
[0029] An aspect of the present disclosure pertains to a method to facilitate providing an automatic digitization of an object, said method comprising: sequentially projecting each of a set of predefined images on the object using an image projector, and capturing images of the object thus illuminated with a pair of cameras; and receiving, at a processor associated with a computing device, a set of data packets to automatically determine a digitization of a portion of the object falling within the field of view of the pair of cameras and the image projector, and a correspondence between a subset of pixels in a first camera with a subset of pixels in a second camera of the pair of cameras, determined independently of a set of calibration parameters of the pair of cameras.

BRIEF DESCRIPTION OF THE DRAWINGS

[0030] In the figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[0031] FIG. 1 illustrates an exemplary structured light imaging setup in accordance with an embodiment of the present disclosure.
[0032] FIGs. 2A-B illustrate illumination patterns following a polar coordinate system and a corresponding graph of code words in accordance with an embodiment of the present disclosure.
[0033] FIG. 3 illustrates an exemplary visualization of encoded pixel blocks computed from camera images of an artifact in accordance with an embodiment of the present disclosure.
[0034] FIG. 4 illustrates the identification of boundaries of pixel blocks and their partitioning into components in accordance with an embodiment of the present disclosure.
[0035] FIGs. 5A-C illustrate an exemplary conceptualization of hypotheses and their realization in accordance with an embodiment of the present disclosure.
[0036] FIGs. 6A-B illustrate an exemplary visualization of pixel blocks in a camera image in accordance with an embodiment of the present disclosure.
[0037] FIGs. 7A-C illustrate exemplary quadrilateral meshes representing the reconstructed surfaces of a mask when using successively refined illumination patterns
consisting of horizontal and vertical stripes in accordance with an embodiment of the present disclosure.
[0038] FIGs. 7D-E illustrate an experimental realization of error in digitization when using patterns oriented along grid lines following a Cartesian system and an elliptic system, respectively, in accordance with an embodiment of the present disclosure.
[0039] FIG. 8 illustrates parameterization of a reconstructed surface over a curvilinear coordinate system determined by a projected illumination in which patterns follow grid lines of a general curvilinear coordinate system, in accordance with embodiments of the present disclosure.
[0040] FIG. 9 illustrates an experimental reconstruction of the surface of an owl-shaped vase from multiple (four) views in accordance with embodiments of the present disclosure.
[0041] FIG. 10 illustrates an exemplary application of mapping textures onto a reconstructed surface in accordance with embodiments of the present disclosure.
[0042] FIG. 11 illustrates contours of aspect ratios of quadrilaterals in a reconstructed surface in accordance with embodiments of the present disclosure.
[0043] FIGs. 12A-B illustrate an exemplary reconstruction of a surface as a single patch and as a multi-patch basis spline CAD surface, in accordance with embodiments of the present disclosure.
[0044] FIGs. 13A-B illustrate an exemplary calculation of mean curvatures and principal directions for surface analysis, for a surface reconstructed in accordance with embodiments of the present disclosure.
[0045] FIG. 14 is a flow diagram illustrating a method to facilitate providing an automatic digitization of an object in accordance with embodiments of the present disclosure.
[0046] FIG. 15 illustrates an exemplary computer system to implement the proposed system in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION

[0047] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0048] Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, firmware and/or by human operators.
[0049] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other computing devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, semiconductor memories such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable media suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
[0050] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
[0051] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0052] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0053] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this invention will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
[0054] While embodiments of the present invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention, as described in the claims.
[0055] The present disclosure relates generally to systems and methods for automatic digitization and surface reconstruction for objects using computer vision techniques.
[0056] An aspect of the present disclosure pertains to a system to facilitate providing an automatic digitization of an object, said system comprising: a pair of cameras; an image projector; a set of predefined images for illumination, where upon the object being placed within a field of view of the image projector and the pair of cameras, each of the set of predefined images is sequentially projected on the object using the image projector, and images of the object thus illuminated are captured with the pair of cameras; and a computing device associated with a processor, where a set of data packets received at the processor is used to automatically determine a digitization of a portion of the object falling within the field of view of the pair of cameras and the image projector, and a correspondence between a subset of pixels in a first camera with a subset of pixels in a second camera of the pair of cameras is determined independently of a set of calibration parameters of the pair of cameras.
[0057] According to an embodiment, the set of data packets pertain to the captured images of the object, the set of predefined images for illumination, and a set of calibration parameters associated with the pair of cameras.
[0058] According to an embodiment, the set of predefined images for illumination are embedded with hierarchically refined binary-encoded patterns oriented along coordinate lines of a general curvilinear planar coordinate system.
[0059] According to an embodiment, upon receiving the set of data packets, the computing device automatically determines a reconstruction of the portion of the object that falls within the field of view of the image projector and the pair of cameras, where the portion of the object is reconstructed as a three-dimensional surface that is parameterized over a planar region.
[0060] According to an embodiment, the three-dimensional surface that is parameterized over a planar region is alternately represented either as a three-dimensional quadrangulation that is parameterized over the planar region, or as a three-dimensional triangulation parameterized over the planar region.
[0061] According to an embodiment, the object has any of a three-dimensional surface or a two-dimensional surface.
[0062] According to an embodiment, a projection of the set of predefined images embedded with a greater hierarchical depth of binary-encoded patterns facilitates reconstruction of the object with a higher resolution of details.
[0063] According to an embodiment, accuracy of a plurality of reconstructed points sampling the reconstructed portion of the object is maintained irrespective of the hierarchical depth of the binary-encoded patterns embedded in a set of projected images.
[0064] According to an embodiment, when the object is reconstructed from a plurality of orientations, a representation for exposed portions of the object is produced as an atlas of charts.
[0065] An aspect of the present disclosure pertains to a method to facilitate providing an automatic digitization of an object, said method comprising: sequentially projecting each of a set of predefined images on the object using an image projector, and capturing images of the object thus illuminated with a pair of cameras; and receiving, at a processor associated with a computing device, a set of data packets to automatically determine a digitization of a portion of the object falling within the field of view of the pair of cameras and the image projector, and a correspondence between a subset of pixels in a first camera with a subset of pixels in a second camera of the pair of cameras, determined independently of a set of calibration parameters of the pair of cameras.
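A three-dimensional surface parameterized over a planar region, represented as a quadrangulation, can be sketched minimally as a regular (u, v) grid lifted to 3D. The height-field surface below and the function names are assumptions made for illustration; the disclosure itself does not prescribe this construction.

```python
import numpy as np

def quad_mesh_from_parameterization(f, nu, nv):
    """Build a 3D quadrangulation parameterized over the planar unit
    square: vertices f(u, v) on a regular (nu+1) x (nv+1) grid, plus
    quad connectivity over that grid. The (u, v) grid itself is the
    planar parameterization of the surface."""
    u = np.linspace(0.0, 1.0, nu + 1)
    v = np.linspace(0.0, 1.0, nv + 1)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    verts = f(uu, vv).reshape(-1, 3)       # vertex (i, j) at row i*(nv+1)+j
    quads = []
    for i in range(nu):
        for j in range(nv):
            a = i * (nv + 1) + j
            # Counter-clockwise quad over one parameter-space cell.
            quads.append((a, a + 1, a + nv + 2, a + nv + 1))
    return verts, np.array(quads)

# Assumed example surface: a height field z = sin(pi*u) * sin(pi*v).
surface = lambda u, v: np.stack(
    [u, v, np.sin(np.pi * u) * np.sin(np.pi * v)], axis=-1)
verts, quads = quad_mesh_from_parameterization(surface, nu=4, nv=4)
```

Because every vertex carries its (u, v) coordinates by construction, operations such as texture mapping or spline fitting over the planar region become index lookups rather than a separate parameterization step.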
IMAGING SETUP AND ILLUMINATION PATTERNS

[0066] FIG. 1 illustrates exemplary functional components 100 of a structured light imaging system in accordance with an embodiment of the present disclosure.
[0067] In an embodiment, as illustrated in FIG. 1, the imaging setup consists of two calibrated digital cameras, e.g., a left camera 102 and a right camera 108, and an un-calibrated digital light projector 106. The reconstructed part of the scene 112 lies at the intersection of the region illuminated by the projector and the regions falling within the fields of view of the two cameras 102 and 108. The scene 112 is sequentially illuminated with images consisting of hierarchically subdivided black and white stripes 104, and is photographed by the two cameras. The projected illuminations determine a coordinate system imposed on the scene 112. For instance, projecting images with horizontal and vertical stripes 104 yields a parameterization over a two-dimensional Cartesian coordinate system.
[0068] In an embodiment, the most refined patterns contain roughly 256 pairs of alternating stripes. Further refinement of these stripes yields a denser sampling of the scene, but is limited in practice by the projector's resolution and contrast ratio. In an embodiment, even one camera can suffice for imaging because the projector, if calibrated, can be treated as an inverse camera. Nevertheless, the setup shown is preferred for its simplicity and because it permits coupling the projector with optical lenses for imaging objects of various sizes without concerns about the distortions they cause in the illumination patterns.

CONCEPTUALIZATION OF IMAGES WITH ENCODED ILLUMINATION AS GRAPHS

[0069] In an embodiment, a graph G of code words is constructed that is an abstraction of the encoding induced by the hierarchical sequence of projected illuminations. For the purpose of using G, and in particular its dual G*, to resolve the correspondence problem, a pair of set-valued maps that assign code words and pixel correspondences to graph vertices/edges are provided. For the purpose of subsequent explanations, it suffices to consider images from just one of the cameras in the setup, which also simplifies the notation required:

U_i: the i-th projected image of binary encoded stripes (e.g., black and white stripes) oriented along the first coordinate direction, with 1 <= i <= m.
V_j: the j-th projected image of binary encoded stripes (e.g., black and white stripes) oriented along the second coordinate direction, with 1 <= j <= n.
I_i: the grayscale camera image with the scene illuminated by the pattern U_i.
J_j: the grayscale camera image with the scene illuminated by the pattern V_j.

[0070] In the following, the set of (m + n) camera images {I_i}_i ∪ {J_j}_j and the (m + n) illumination patterns {U_i}_i ∪ {V_j}_j are considered to be given. Pixel intensities in all these images lie in the range [0, 255], with I(x, y) denoting the intensity at the pixel (x, y) in an image I. Pixel intensities in the images {U_i}_i ∪ {V_j}_j are either 0 or 255, corresponding to black and to white pixels, respectively. Without loss of generality, it is assumed that the images {U_i}_i and {V_j}_j are ordered according to increasing stripe refinement.

Grayscale to binary conversion

[0071] In an embodiment, it is convenient to introduce a ternary-valued map b : [0, 255] -> {0, 1/2, 1} to serve as a filter for grayscale to binary conversion by assigning 0 to dark pixels, 1 to bright pixels and 1/2 to gray pixels with indiscernible brightness. Since pixel intensities in {U_i}_i ∪ {V_j}_j are either 0 or 255, the specific definition of b is inconsequential for them. This, however, is not the case for camera images {I_i}_i ∪ {J_j}_j, whose gray pixels can be attributed to noise, ambient lighting, surface texture and non-sharp transitions in the projected stripes. A simple choice for b is as a thresholding operation that maps intensities below 128 - δ to 0, intensities above 128 + δ to 1 and intensities in the interval [128 - δ, 128 + δ] to 1/2. Hence the choice of 0 …

For the patterns {U_i}_i ∪ {V_j}_j, the codes

α(x, y) := (b(U_1(x, y)), b(U_2(x, y)), ..., b(U_m(x, y))), (1a)
β(x, y) := (b(V_1(x, y)), b(V_2(x, y)), ..., b(V_n(x, y))), (1b)

define the set

C := ∪_{(x, y)} {(α, β) : α = (b(U_1(x, y)), ..., b(U_m(x, y))), β = (b(V_1(x, y)), ..., b(V_n(x, y)))}. (2)

Further, introduce the undirected graph G with vertex set C. To distinguish (α, β)-codes from graph vertices, label the vertex representing the code (α, β) by v(α, β). Interpreting the codes α ± 1 and β ± 1 as addition/subtraction of m- and n-bit binary numbers, edges in G connect each vertex v(α, β) to the vertices v(α - 1, β), v(α + 1, β), v(α, β + 1) and v(α, β - 1). Then, label the edge between vertices v_1 and v_2 as e{v_1, v_2}. That G is a simple and planar graph is …
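The ternary thresholding filter of paragraph [0071] and the stacking of filtered images into per-pixel code words can be sketched as follows. The default value of delta and the toy intensities are assumptions made for the example; the disclosure only requires some threshold band around 128.

```python
import numpy as np

def ternary_filter(img, delta=20):
    """Map grayscale intensities in [0, 255] to {0, 1/2, 1}:
    below 128 - delta -> 0 (dark), above 128 + delta -> 1 (bright),
    anything in between -> 1/2 (indiscernible brightness)."""
    out = np.full(img.shape, 0.5)
    out[img < 128 - delta] = 0.0
    out[img > 128 + delta] = 1.0
    return out

def pixel_codes(images, delta=20):
    """Stack the filtered images so every pixel (x, y) receives a tuple
    code (b(I_1(x, y)), ..., b(I_m(x, y))), in the spirit of (1a)."""
    return np.stack([ternary_filter(im, delta) for im in images], axis=-1)

# Toy 1x4 scene imaged under two stripe illuminations.
I1 = np.array([[250, 240, 10, 5]])   # coarse stripe: bright, bright, dark, dark
I2 = np.array([[245, 12, 130, 8]])   # finer stripe; pixel 2 is ambiguous gray
codes = pixel_codes([I1, I2])
```

Pixels whose code contains a 1/2 entry carry an unreliable bit at that refinement level and can be excluded from, or handled specially in, the subsequent correspondence search.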

Documents

Application Documents

# Name Date
1 202041027369-STATEMENT OF UNDERTAKING (FORM 3) [27-06-2020(online)].pdf 2020-06-27
2 202041027369-REQUEST FOR EXAMINATION (FORM-18) [27-06-2020(online)].pdf 2020-06-27
3 202041027369-FORM 18 [27-06-2020(online)].pdf 2020-06-27
4 202041027369-FORM 1 [27-06-2020(online)].pdf 2020-06-27
5 202041027369-DRAWINGS [27-06-2020(online)].pdf 2020-06-27
6 202041027369-DECLARATION OF INVENTORSHIP (FORM 5) [27-06-2020(online)].pdf 2020-06-27
7 202041027369-COMPLETE SPECIFICATION [27-06-2020(online)].pdf 2020-06-27
8 202041027369-FORM-26 [01-09-2020(online)].pdf 2020-09-01
9 202041027369-Proof of Right [27-11-2020(online)].pdf 2020-11-27
10 202041027369-FER.pdf 2022-01-19
11 202041027369-FORM-26 [19-07-2022(online)].pdf 2022-07-19
12 202041027369-FER_SER_REPLY [19-07-2022(online)].pdf 2022-07-19
13 202041027369-CORRESPONDENCE [19-07-2022(online)].pdf 2022-07-19
14 202041027369-CLAIMS [19-07-2022(online)].pdf 2022-07-19
15 202041027369-ABSTRACT [19-07-2022(online)].pdf 2022-07-19
16 202041027369-Correspondence_Proof of Right (Form1), Power of Attorney_26-07-2022.pdf 2022-07-26
17 202041027369-PatentCertificate07-12-2023.pdf 2023-12-07
18 202041027369-IntimationOfGrant07-12-2023.pdf 2023-12-07
19 202041027369-OTHERS [03-01-2024(online)].pdf 2024-01-03
20 202041027369-EDUCATIONAL INSTITUTION(S) [03-01-2024(online)].pdf 2024-01-03

Search Strategy

1 3d_imagingE_13-01-2022.pdf

ERegister / Renewals

3rd: 03 Jan 2024 (from 27/06/2022 to 27/06/2023)
4th: 03 Jan 2024 (from 27/06/2023 to 27/06/2024)
5th: 03 Jan 2024 (from 27/06/2024 to 27/06/2025)
6th: 03 Jan 2024 (from 27/06/2025 to 27/06/2026)
7th: 03 Jan 2024 (from 27/06/2026 to 27/06/2027)
8th: 03 Jan 2024 (from 27/06/2027 to 27/06/2028)
9th: 03 Jan 2024 (from 27/06/2028 to 27/06/2029)
10th: 03 Jan 2024 (from 27/06/2029 to 27/06/2030)