
A Method For Determining Depth Of An Object In A Three Dimensional (3D) Space From A View Point

Abstract: A METHOD FOR DETERMINING DEPTH OF AN OBJECT IN A THREE DIMENSIONAL (3D) SPACE FROM A VIEW POINT A method for determining depth 301 of an object 302 in a three dimensional space from a view point 304, wherein the method comprising of: placing a camera 303 on a plane to capture a live video of the object 302; processing the captured video by a graphical user interface 104 to display a regular video and to obtain and display an edge detected video; placing a laser sensor 305 on the same plane; passing a laser light on object 302 to obtain a laser spot 306 on object 302; obtaining the laser spot 306 on the displayed regular video 313 and a laser blob 307 on the displayed edge detected video 314; performing a mouse-click action at centre of laser blob 307 and thereby obtaining co-ordinates 308 at laser spot 306 of the object 302 and obtaining depth 301 using the obtained co-ordinates 308 and view point co-ordinates 309. Fig. 6.


Patent Information

Application #
201841041778
Filing Date
05 November 2018
Publication Number
19/2020
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
sunita@skslaw.org
Parent Application

Applicants

Amrita Vishwa Vidyapeetham
Amritapuri Campus, Kollam - 690525, Kerala

Inventors

1. Dr. Rajesh Kannan Megalingam
South Block (Vyasa Bhavan), Mata Amritanandamayi Math, Kollam, Kerala - 690525
2. Sriteja Gone
H.No. : 2-8-684, Bhavani Nagar Colony, Hanamkonda, Warangal, Telangana -506001
3. Ashwin Kashyap Nellutla
H.No 9-7-165/2, Srinilayam, Bommidi Street, Khammam, Telangana - 507001
4. Vamsy Vivek Gedela
1-32/10, Old Aganampudi, Visakhapatnam, Andhra Pradesh - 530046
5. Koppaka Ganesh Sai Apuroop
D.no. 6-64-16/3, SRAMIKANAGAR, GAJUWAKA, VISAKHAPATNAM, Andhra Pradesh - 530026
6. Shiva Bandyopadhyay
Shivam, TC 2/61(1), Near FCI, Kazhakuttom P.O., Thiruvananthapuram, Kerala - 695582
7. Dr. Venkat Rangan
Amrita Vishwa Vidyapeetham Amrita Nagar Ettimadai PO 641105 Coimbatore, Tamil Nadu

Specification

A METHOD FOR DETERMINING DEPTH OF AN OBJECT IN A THREE DIMENSIONAL (3D) SPACE FROM A VIEW POINT
FIELD OF INVENTION
[0001] The embodiment herein generally relates to depth estimation, and more
particularly, to a method for determining depth of an object in a three dimensional
(3D) space from a view point.
BACKGROUND AND PRIOR ART
[0002] Depth determination refers to a technique aiming to obtain the distance
of a point or an object in a three dimensional (3D) space from a view point.
Conventionally, depth determination techniques use complex image
processing methods which are dependent on various factors such as luminance, the size
of the object whose depth is to be measured, the material of the object, the focal
length of the camera, and so on.
[0003] Therefore, there is a need to develop a method for determining depth of
an object in a three dimensional (3D) space from a view point that is independent
of size and material of the object, camera parameters and lighting conditions.
Further, there is still a need to develop a simple and computationally fast method
for determining depth of the object in the three dimensional (3D) space from the
view point.
OBJECTS OF THE INVENTION
[0004] Some of the objects of the present disclosure are described herein below:
[0005] A main object of the present invention is to provide a method for
determining depth of an object in a three dimensional (3D) space from a view
point.

[0006] Another object of the present invention is to provide a method for
determining depth of the object from the view point irrespective of size of the
object and material of the object.
[0007] Still another object of the present invention is to provide a method for
determining depth of the object from the view point independent of camera
parameters such as focal length.
[0008] Yet another object of the present invention is to provide a method for
determining depth of the object from the view point that is computationally fast.
[0009] Another object of the present invention is to provide a method for
determining depth of the object from the view point that can be implemented
under any lighting conditions.
[00010] The other objects and advantages of the present invention will be
apparent from the following description when read in conjunction with the
accompanying drawings, which are incorporated for illustration of preferred
embodiments of the present invention and are not intended to limit the scope
thereof.
SUMMARY OF THE INVENTION
[00011] In view of the foregoing, an embodiment herein provides a method for determining depth of an object in a three dimensional (3D) space from a view point. The method for determining depth of the object in a three dimensional (3D) space from a view point, wherein the method comprising of: placing a camera on a plane to capture a live video of the object, processing the captured video by a graphical user interface (GUI) to display the captured video as a regular video with continuous updation of image frames for monitoring and further to obtain

and display an edge detected video for determining the depth, placing a laser sensor on the same plane where the camera is mounted, passing a laser light on the object using the laser sensor to obtain a laser spot on the object while capturing the video by the camera and thereby obtaining the laser spot on the displayed regular video and a laser blob on the displayed edge detected video, performing a mouse-click action at the centre of the laser blob obtained on the edge detected video and thereby obtaining co-ordinates at the laser spot of the object through a function running in the GUI, and obtaining the distance between the laser spot and the view point using the obtained co-ordinates and the view point co-ordinates to determine the depth of the object in the 3D space from the view point.
[00012] The method for determining depth of the object in the three dimensional (3D) space from the view point further includes: obtaining an angle between the optical axis and an optical ray from the camera to the laser spot using the obtained distance between the laser spot and the view point, radians per pixel and radians offset, and thereby determining depth of the object in the three dimensional (3D) space from the view point using the obtained angle and the distance between the camera and the laser sensor placed on the same plane. According to an embodiment, the view point is the optical axis of the camera. According to an embodiment, the regular video and the edge detected video are displayed in separate windows of the GUI.
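Restated compactly, and using the notation of equations (3) and (4) given later in the specification (dpc is the pixel distance from the view point to the laser spot, rpc the radians per pixel, ro the radian offset, and d the fixed distance between the camera and the laser sensor), the relationship summarised above is:

\theta = r_{pc} \cdot d_{pc} + r_{o}, \qquad D = \frac{d}{\tan\theta}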
[00013] According to an embodiment, the camera is placed on the plane of an autonomous mobile robot or autonomous robotic arm to capture live video of the object. According to an embodiment, the GUI utilizes canny edge detection method with sensitive threshold value to obtain edge detected video. According to an embodiment, the GUI includes button widget provided to toggle between the

regular video and the edge detected video. According to an embodiment, the
determined depth is the distance between the view point and the plane of the laser
spot.
[00014] According to an embodiment, the captured video is of 640 x 480 pixels in
dimension. According to an embodiment, the mouse-click action provides the
output pixel location of where the click was performed. According to an
embodiment, the captured video runs on 60 frames per second (fps) speed.
According to an embodiment, the laser sensor used is red laser having wavelength
600nm.
[00015] These and other aspects of the embodiments herein will be better
appreciated and understood when considered in conjunction with the following
description and the accompanying drawings. It should be understood, however,
that the following descriptions, while indicating preferred embodiments and
numerous specific details thereof, are given by way of illustration and not of
limitation. Many changes and modifications may be made within the scope of the
embodiments herein without departing from the spirit thereof, and the
embodiments herein include all such modifications.
BRIEF DESCRIPTION OF DRAWINGS
[00016] The detailed description is set forth with reference to the accompanying
figures. In the figures, the left-most digit(s) of a reference number identifies the
figure in which the reference number first appears. The use of the same reference
numbers in different figures indicates similar or identical items.
[00017] Fig. 1 illustrates a system architecture of the mouse-click method, according
to an embodiment herein;

[00018] Fig.2 illustrates a design of a Tkinter graphical user interface (GUI),
according to an embodiment herein;
[00019] Fig.3 illustrates a setup of a camera and a laser sensor for determining
depth of the object in a three dimensional (3D) space from a view point,
according to an embodiment herein;
[00020] Fig.4 illustrates a graphical representation of the laser spot in a video
frame for determining the depth of the object from the view point, according to an
embodiment herein;
[00021] Fig.5 illustrates a pictorial representation of a regular video with a laser
spot and an edge detected video with a blob, according to an embodiment herein;
and
[00022] Fig.6 illustrates a flow chart showing a method for determining depth of
the object in the three dimensional (3D) space from the view point, according to
an embodiment herein.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[00023] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

[00024] As mentioned above, there is a need to develop a simple and computationally fast method for determining depth of an object in a three dimensional (3D) space from a view point that is independent of size and material of the object, camera parameters and lighting conditions. The embodiments herein achieve this by providing a method for determining depth of the object in the three dimensional (3D) space from the view point that is independent of size and material of the object, camera parameters and lighting conditions. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[00025] Fig.1 illustrates the system architecture 100 of the mouse-click method, according to an embodiment herein. According to an embodiment, the system architecture 100 comprises a camera, a laser sensor and a Tkinter based graphical user interface (GUI). According to an embodiment, the camera, which is the primary sensor, is mounted on an autonomous mobile robot or an autonomous robotic arm to capture live video of the object. The captured video is the input feed from the camera 101 to the Tkinter based graphical user interface 104. According to an embodiment, the Tkinter based Graphical User Interface (GUI) 104 is configured for processing the captured video. The Tkinter based Graphical User Interface 104 includes two windows, wherein one window is provided for displaying a regular video and the other window for displaying an edge detected video. The edge detected video is obtained after processing the captured video by the GUI. The regular video is used for monitoring. The edge detected video is used for determining depth.

[00026] The live video feed is displayed as the regular video on one window of the GUI. According to an embodiment, the laser sensor is placed on the same plane where the camera is placed and is provided for passing a laser light on the object to obtain a laser spot on the object, thereby obtaining the laser spot on the displayed regular video and a laser blob on the displayed edge detected video. According to an embodiment, the mouse-click libraries 105 of the GUI help in performing the mouse-click action on the centre of the blob displayed in the edge detected video. Upon a click of the mouse on the centre of the blob created by the laser in the edge detected video, the function gives a 2D coordinate (x, y) from the image frame. The coordinates (x, y) are used in successive iterations to obtain a mathematical relationship between the depth and the (x, y) coordinate to find the 3D coordinate (x, y, z) 106, z being the depth. Thus, the depth of the object in the 3D space from the view point is determined. The process runs on the Tkinter Graphical User Interface (GUI).
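By way of illustration only, a minimal Tkinter mouse-click binding of the kind described might look as follows; the widget choice, the sign convention for the centred coordinates and the print statement are assumptions, not code from the application:

import tkinter as tk

FRAME_W, FRAME_H = 640, 480  # captured video resolution stated in the application

def on_click(event):
    # event.x and event.y are the pixel coordinates of the click inside the widget
    x = event.x - FRAME_W // 2      # shift the origin to the centre of the frame
    y = (FRAME_H // 2) - event.y    # assumed sign convention: +y upward from the centre
    print("clicked laser blob at centred coordinates:", (x, y))

root = tk.Tk()
label = tk.Label(root, text="video frame would be displayed here")
label.pack()
label.bind("<Button-1>", on_click)  # event binding for a left mouse-click on the label
root.mainloop()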
[00027] Fig.2 illustrates a design of the Tkinter graphical user interface (GUI) 104, according to an embodiment herein. Tkinter is one of the native GUI toolkits in Python. According to an embodiment, the Tkinter graphical user interface (GUI) 104 comprises an edge detection function 103, a mouse-click event/mouse-click function 105, a button widget 202, a label widget 203, event bindings 204 and a root widget 205. The video captured by the camera is communicated to the GUI using OpenCV (Open Source Computer Vision Library) 201.
[00028] The root widget 205 creates an empty workspace in the window of the GUI 104. The contents of the workspace are customized according to the need of the user. The workspace with a display screen of resolution 640x480 pixels (px) for the video is created through the label widget 203. Two button widgets 202

are provided to toggle between the regular video and the edge detected video. The label widget 203 is used for the captured video. The button widgets 202, when clicked, call the function inside the code and execute it. All the contents are packed inside the label widget 203. The captured video is displayed with the help of OpenCV 201, wherein the camera captures an image every millisecond and the image frame is updated every millisecond inside the label widget 203, which turns out to be a 60 frames-per-second (fps) video feed.
[00029] The event bindings 204 are the functions associated with the Tkinter libraries. They bind the occurrences to the GUI 104. One such event is the mouse-click function 105.
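A minimal illustrative sketch of such a GUI, assuming Python with OpenCV, Tkinter and Pillow, is given below; the class and method names, the Canny threshold values and the frame-conversion details are assumptions for illustration and are not taken from the application:

import tkinter as tk
import cv2
from PIL import Image, ImageTk  # Pillow is assumed for frame-to-PhotoImage conversion

class VideoGUI:
    def __init__(self, root, camera_index=0):
        self.cap = cv2.VideoCapture(camera_index)
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        self.show_edges = False          # toggled by the button widgets
        self.label = tk.Label(root)      # label widget holding the video frame
        self.label.pack()
        tk.Button(root, text="Regular video",
                  command=lambda: self.set_mode(False)).pack(side=tk.LEFT)
        tk.Button(root, text="Edge detected video",
                  command=lambda: self.set_mode(True)).pack(side=tk.LEFT)
        self.root = root
        self.update_frame()

    def set_mode(self, edges):
        self.show_edges = edges

    def update_frame(self):
        ok, frame = self.cap.read()
        if ok:
            if self.show_edges:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                edges = cv2.Canny(gray, 30, 90)           # sensitive thresholds (assumed values)
                rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
            else:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            photo = ImageTk.PhotoImage(Image.fromarray(rgb))
            self.label.configure(image=photo)
            self.label.image = photo     # keep a reference so Tk does not discard the image
        self.root.after(1, self.update_frame)  # re-schedule the update roughly every millisecond

if __name__ == "__main__":
    root = tk.Tk()
    VideoGUI(root)
    root.mainloop()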
[00030] The image frame is updated every millisecond because of the need to introduce edge detection in the video. The GUI programming reduces the complexity in converting a normal video to an edge detected video. The program is divided into two parts. In the first part, the regular video is transmitted and displayed by continuous updation of image frames every millisecond. The second part involves the logic for edge detection 103, wherein the canny edge detection method with a sensitive threshold value is used. The edge detected video alone is used for depth determination.
[00031] Fig.3 illustrates a setup of a camera 303 and a laser sensor 305 for determining depth 301 of the object 302 in a three dimensional (3D) space from a view point 304, according to an embodiment herein. The camera 303 and the laser sensor 305 are mounted on the same plane. 'D' is the distance to be measured from the object 302 to the camera 303. 'd' is the fixed distance between the camera 303 and the laser sensor 305. The camera 303 captures live video of the object 302 and communicates the captured video to the GUI 104. The

GUI 104 is provided for processing the captured video to display the captured video as a regular video with continuous updation of image frames for monitoring and further to obtain and display an edge detected video for depth estimation. The regular video and the edge detected video are displayed in separate windows of the GUI 104.
[00032] The laser sensor 305 is used to project a laser spot 306 on the object 302 while capturing the video by the camera 303 and thereby obtaining the laser spot
306 on the displayed regular video 313 and a laser blob 307 on the displayed edge detected video 314. The laser spot 306 appears as a dispersed circular laser blob
307 in the displayed edge detected video 314. The center pixel (view point) 304 lies on the optical axis of the camera 303. The mouse-click action is performed at centre of the laser blob 307 obtained on the edge detected video 314 and thereby obtaining co-ordinates 308 at the laser spot 306 of the object 302 through a function running in the GUI 104. The distance from the view point 304 to the center of the laser spot 306 is 'dpc'. The distance 'dpc' 310 between centre of the laser spot 306 and the view point 304 is obtained using the obtained co-ordinates
308 and view point co-ordinates 309 to determine the depth 301 of the object 302 in the 3D space from the view point 304.
[00033] The angle between the optical axis 304 and an optical ray from the camera 303 to the laser spot 306 is (θ). The angle (θ) 311 between the optical axis 304 and an optical ray from the camera 303 to the laser spot 306 is obtained using the obtained distance 310 between the centre of the laser spot 306 and the view point 304, radians per pixel and radians offset. The depth 301 of the object 302 in the three dimensional (3D) space from the view point 304 is determined using

obtained angle 311 and distance 'd' 312 between the camera 303 and the laser sensor 305 placed on the same plane.
[00034] Fig.4 illustrates a graphical representation of the laser spot in a video frame for determining the depth of the object from the view point, according to an embodiment herein. Along the x-axis is the width of the video frame and along the y-axis is the height of the video frame, which depend on the resolution of the video. The point C(x0, y0) is the center point of the video frame, and the point L(xl, yl) is the point where the laser beam is incident. Xl and Xu are the lower and upper boundaries respectively along the x-axis. Yl and Yu are the lower and upper boundaries respectively along the y-axis. The distance dpc is calculated using (1),
dpc = √((xl - x0)² + (yl - y0)²)     (1)
[00035] The center point of the video frame is taken as the origin of the coordinate system. The captured video is of the resolution 640 x 480 pixels. The ranges of values along the x-axis and y-axis are -320 to +320 and -240 to +240 respectively. When the mouse-click event occurs at the laser blob in the video frame, the algorithm fetches the position values as the x-intercept and y-intercept. With the center point of the video frame C(x0, y0) taken as the origin, dpc is calculated using (2),
dpc = √(x² + y²)     (2)
From Fig.3,
D = d / tan(θ)     (3)
Wherein, D in (3) is the depth of the object 302 from the view point 304,
θ = rpc * dpc + ro     (4)

where rpc is the radians per pixel and ro is the radian offset.
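For concreteness, a short sketch combining equations (2), (3) and (4) is given below; it is illustrative only, and the numeric calibration values are placeholders, not values from the application:

import math

def depth_from_click(x, y, d, rpc, ro):
    # x, y: clicked pixel coordinates relative to the frame centre (view point)
    dpc = math.sqrt(x * x + y * y)      # equation (2): pixel distance to the view point
    theta = rpc * dpc + ro              # equation (4): angle of the optical ray
    return d / math.tan(theta)          # equation (3): depth D from the view point

# Example with placeholder calibration values (assumed, for illustration only)
print(depth_from_click(x=40, y=25, d=0.06, rpc=0.0024, ro=0.01))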
[00036] Fig.5 illustrates a pictorial representation of a regular video with a laser spot and an edge detected video with a blob, according to an embodiment herein. When the mouse-click Tkinter program is executed, the Tkinter window displays the regular video feed, which is used for monitoring. Upon the click of the button widget, the logic shifts to the edge detection, wherein every frame that is displayed goes through the process of canny edge detection. This does not create any delay in the video feed as the frames are continuously updated every millisecond. The red spot, though a minute one, spreads on the object during video capture and appears as a laser blob. Hence, for the accurate measurement of the pixel value of the centre of the laser light, edge detection is applied on the individual image frames in the video. Canny edge detection applied with hue-saturation-value (HSV) boundaries for the red colour is used for edge detection of every frame. According to an embodiment, the video runs at 60 fps and a 5 megapixel (5MP) USB camera is used. The camera captures the video in 640 x 480 pixels resolution.
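A sketch of that per-frame processing, assuming OpenCV and placeholder HSV bounds for the red colour (the application does not list exact threshold values), might look as follows:

import cv2
import numpy as np

def edge_detect_laser(frame_bgr):
    # Isolate reddish pixels first, then run Canny so mainly the laser blob edges remain.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined (bounds are assumed values).
    lower = cv2.inRange(hsv, np.array([0, 120, 120]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 120, 120]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower, upper)
    masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 30, 90)      # sensitive thresholds, as described; values assumed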
[00037] The centre of the image frame is taken as the origin. The centre of the blob which appears to be a near-circle due to the spreading of the laser light is taken as the (x, y) coordinate. The entire processing is displayed in the GUI built on Tkinter. In the page with the edge detected video, the mouse-click action is performed on the blob created by the laser spot. According to an embodiment, the laser sensor 305 used is red laser of wavelength 600 nm. The coordinates (x,y) give us the position of the spot in the plane from the camera reference origin, and

are used to obtain the depth from the camera. The depth information obtained is
the distance between the camera lens and the plane of the laser spot.
[00038] Fig.6 illustrates a flow chart showing a method for determining depth of
the object in the three dimensional (3D) space from the view point, according to
an embodiment herein.
[00039] A method for determining depth of the object in the three dimensional
(3D) space from the view point; wherein said method comprising of:
[00040] at block 601, placing a camera on a plane to capture a live video of the
object, wherein the view point is an optical axis of the camera;
[00041] at block 602, processing the captured video by a graphical user interface
(GUI) to display the captured video as a regular video with continuous updation
of image frames for monitoring and further to obtain and display an edge detected
video for depth estimation, wherein the regular video and the edge detected video
are displayed in separate windows of the GUI;
[00042] at block 603, placing a laser sensor on the same plane where the camera
is mounted;
[00043] at block 604, passing a laser light on the object using the laser sensor to
obtain a laser spot on the object while capturing the video by the camera and
thereby obtaining the laser spot on the displayed regular video and a laser blob on
the displayed edge detected video;
[00044] at block 605, performing a mouse-click action at centre of the laser blob
created on the edge detected video and thereby obtaining co-ordinates at the laser
spot of the object through a function running in the GUI;
[00045] at block 606, obtaining distance between the laser spot and the view
point using the obtained co-ordinates and view point co-ordinates;

[00046] at block 607, obtaining an angle between the optical axis and an optical
ray from the camera to the laser spot using obtained distance between the laser
spot and the view point, radians per pixel and radians offset;
[00047] at block 608, determining depth of the object in the three dimensional
(3D) space from the view point using obtained angle and distance between the
camera and the laser sensor placed on the same plane.
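The specification treats the radians per pixel (rpc) and the radian offset (ro) as known quantities and does not state how they are obtained; one plausible way, shown here purely as an assumption, is to solve equations (3) and (4) for rpc and ro from two reference measurements taken at known depths:

import math

def calibrate(d, dpc1, D1, dpc2, D2):
    # From D = d / tan(theta) and theta = rpc * dpc + ro, two known depths give
    # two linear equations in rpc and ro, which are solved here.
    theta1 = math.atan(d / D1)
    theta2 = math.atan(d / D2)
    rpc = (theta1 - theta2) / (dpc1 - dpc2)
    ro = theta1 - rpc * dpc1
    return rpc, ro

# Example with placeholder measurements (assumed values for illustration only)
rpc, ro = calibrate(d=0.06, dpc1=120, D1=0.40, dpc2=60, D2=0.75)
print(rpc, ro)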
[00048] A main advantage of the present invention is that the provided method is
simple in terms of determining depth of the object in the (3D) space from the view
point.
[00049] Another advantage of the present invention is that the method allows
depth determination of the object from the view point irrespective of size of the
object and material of the object.
[00050] Still another advantage of the present invention is that the provided
method allows depth determination of the object from the view point independent
of camera parameters such as focal length.
[00051] Yet another advantage of the present invention is that the provided
method is computationally fast.
[00052] Another advantage of the present invention is that the provided method
can be implemented under any lighting conditions.
[00053] Another advantage of the present invention is that the proposed method
avoids very complex image processing algorithms.
[00054] The foregoing description of the specific embodiments will so fully
reveal the general nature of the embodiments herein that others can, by applying
current knowledge, readily modify and/or adapt for various applications such
specific embodiments without departing from the generic concept, and, therefore,

such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

We claim:
1. A method for determining depth 301 of an object 302 in a three dimensional (3D) space from a view point 304, wherein the method comprising of:
placing a camera 303 on a plane to capture a live video of the object 302, wherein the view point 304 is an optical axis of the camera 303; processing the captured video by a graphical user interface (GUI) 104 to display the captured video as a regular video with continuous updation of image frames for monitoring and further to obtain and display an edge detected video for depth estimation, wherein the regular video and the edge detected video displayed in separate windows of the GUI 104; placing a laser sensor 305 on the same plane where the camera 303 is mounted;
passing a laser light on the object 302 using the laser sensor 305 to obtain a laser spot 306 on the object 302 while capturing the video by the camera 303 and thereby obtaining the laser spot 306 on the displayed regular video 313 and a laser blob 307 on the displayed edge detected video 314; performing a mouse-click action at centre of the laser blob 307 obtained on the edge detected video 314 and thereby obtaining co-ordinates 308 at the laser spot 306 of the object 302 through a function running in the GUI 104;
obtaining distance 310 between the laser spot 306 and the view point 304 using the obtained co-ordinates 308 and view point co-ordinates 309 to determine the depth 301 of the object 302 in the 3D space from the view point 304.

2. The method as claimed in claim 1, wherein the method for determining
depth 301 of the object 302 in the three dimensional (3D) space from the
view point 304 further includes:
obtaining an angle 311 between the optical axis 304 and an optical ray from the camera 303 to the laser spot 306 using obtained distance 310 between the laser spot 306 and the view point 304, radians per pixel and radians offset; and thereby determining depth 301 of the object 302 in the three dimensional (3D) space from the view point 304 using obtained angle 311 and distance 312 between the camera 303 and the laser sensor 305 placed on the same plane.
3. The method as claimed in claim 2, wherein the camera 303 placed on the plane of an autonomous mobile robot or autonomous robotic arm to capture live video of the object 302.
4. The method as claimed in claim 3, wherein the GUI 104 utilizes canny edge detection method 103 with sensitive threshold value to obtain edge detected video 314.
5. The method as claimed in claim 4, wherein the GUI 104 includes button widget 202 provided to toggle between the regular video 313 and the edge detected video 314.
6. The method as claimed in claim 2, wherein the determined depth 301 is the distance between the view point 304 and the plane of the laser spot 306.
7. The method as claimed in claim 3, wherein the captured video is of 640 x 480 pixels in dimension.

8. The method as claimed in claim 1, wherein the mouse-click action provides the output pixel location of where the click was performed.
9. The method as claimed in claim 7, wherein the captured video runs on 60 frames per second (fps) speed.
10. The method as claimed in claim 1, wherein the laser sensor 305 used is
red laser having wavelength 600nm.

Documents

Application Documents

# Name Date
1 201841041778-FORM 1 [05-11-2018(online)].pdf 2018-11-05
2 201841041778-COMPLETE SPECIFICATION [05-11-2018(online)].pdf 2018-11-05
3 201841041778-DRAWINGS [05-11-2018(online)].pdf 2018-11-05
4 201841041778-DECLARATION OF INVENTORSHIP (FORM 5) [05-11-2018(online)].pdf 2018-11-05
5 201841041778-STATEMENT OF UNDERTAKING (FORM 3) [05-11-2018(online)].pdf 2018-11-05
6 201841041778-POWER OF AUTHORITY [05-11-2018(online)].pdf 2018-11-05
7 201841041778-FORM-26 [29-11-2018(online)].pdf 2018-11-29
8 201841041778-ENDORSEMENT BY INVENTORS [29-11-2018(online)].pdf 2018-11-29
9 Correspondence by Agent_Form5,Power of Attorney_03-12-2018.pdf 2018-12-03
10 201841041778-FORM-26 [14-02-2019(online)].pdf 2019-02-14
11 Correspondence by Agent_Form26_18-02-2019.pdf 2019-02-18
12 201841041778-Proof of Right [25-07-2020(online)].pdf 2020-07-25
13 201841041778-FORM 18 [03-08-2020(online)].pdf 2020-08-03
14 201841041778-FER.pdf 2021-10-17
15 201841041778-FORM-26 [15-12-2021(online)].pdf 2021-12-15
16 201841041778-AMENDED DOCUMENTS [01-01-2022(online)].pdf 2022-01-01
17 201841041778-FORM 13 [01-01-2022(online)].pdf 2022-01-01
18 201841041778-MARKED COPIES OF AMENDEMENTS [01-01-2022(online)].pdf 2022-01-01
19 201841041778-POA [01-01-2022(online)].pdf 2022-01-01
20 201841041778-EDUCATIONAL INSTITUTION(S) [05-01-2022(online)].pdf 2022-01-05

Search Strategy

1 SearchStrategyMatrixE_07-09-2021.pdf