Abstract: An augmented reality experience is provided to a user of a hand held device, such as a mobile phone, which incorporates an electronic processor, a camera and a display. In particular, images taken from video footage are displayed in a display of a hand held device together with a live camera view, to create the illusion that the subject of the video - ie the virtual moving image - is present in the field of view of the camera in real time. In this context the term "real world" image means an image taken from reality, such as a physical, real-world scenario using an electronic photo-capture technique, e.g. video recording. A camera (10) of a hand held device is aimed at a well known object (12), which is recognisable to the device. A moving virtual image (14) of an actor playing the part of an historical figure, chosen because of its relevance to the object (12), is displayed.
Augmented Reality Apparatus and Method
The present invention relates to an apparatus and a method
for providing an augmented reality experience, and is
concerned particularly with an apparatus and a method for
providing an augmented reality experience in a hand held
device having a camera.
Augmented reality, in which the viewing of a real world
environment is enhanced using computer generated input, is
becoming available on various platforms, including
television, head up displays, and to a limited extent, hand
held devices such as cell phones.
The use of hand held devices, such as cell phones, as
cameras has been enhanced by the availability of small,
specialised downloadable programs, known informally as
apps. Many of these include computer generated visual
effects that can be combined with a "live view" through the
camera, to provide the user with a degree of augmented
reality for an improved image or amusement. However the
incorporation of video footage into the live view of a
camera has proved to be difficult due to the limited
processing power available in most hand held devices, and
the lack of a functional codebase provided with the
built-in frameworks.
Embodiments of the present invention aim to provide
apparatus and a method for incorporating an apparently
moving image into a live camera view of a hand held device.
The present invention is defined in the attached
independent claims to which reference should now be made.
Further, preferred features may be found in the sub-claims
appended thereto.
According to one aspect of the present invention there is
provided apparatus for displaying an augmented reality on a
display of a hand held device having a camera, the
apparatus comprising a context identification unit for
identifying a context from at least one real image captured
by the device, a virtual image retrieval unit for selecting
and displaying a virtual image in the display and a virtual
image positioning unit for positioning the virtual image in
the display, wherein the apparatus is arranged to display a
virtual image comprising an electronically captured, moving
real world image.
Preferably the virtual image is one that has been
previously stored.
In a preferred arrangement the virtual image comprises a
sequence of still images taken from a moving video.
Alternatively or additionally the virtual image may
comprise a continuous moving video image.
The virtual image may comprise an image of a person or
creature, or could be any other "real world" object or
item.
In a preferred arrangement the context identification unit
is arranged in use to identify a context by comparing at
least one object in a field of view with stored data from a
plurality of objects. The image retrieval unit is
preferably arranged to select an image from a plurality of
stored images according to context information determined
by the context identification unit. The positioning unit is
preferably arranged in use to position the virtual image
according to context information determined by the context
identification unit.
The positioning of the image by the positioning unit may
include sizing of the image in the display, and may include
anchoring the image in the display, with respect to context
information determined by the context identification unit.
The context identification unit, and/or the virtual image
retrieval unit, and/or the virtual image positioning unit
may comprise processes arranged in use to be performed by
one or more electronic processing devices.
The invention also includes a method of displaying an
augmented reality on a display of a hand held device having
a camera, the method comprising identifying a context from
at least one real image captured by the device and
selecting and positioning a virtual image on the display,
wherein the method comprises displaying a virtual image
comprising an electronically captured, moving real world
image.
Preferably the virtual image is one that has been
previously stored.
In a preferred arrangement, the virtual image comprises a
sequence of still images taken from a moving video.
In a preferred arrangement the method comprises identifying
a context by comparing at least one object in the field of
view with stored data from a plurality of objects. The
method preferably comprises selecting an image from a
plurality of stored images according to determined context
information. The method also preferably comprises
positioning the virtual image according to context
information determined by the context identification unit.
The positioning of the image by the positioning unit may
include sizing of the image in the display and may include
anchoring the image in the display with respect to context
information determined by the context identification unit.
The invention also comprises a program for causing a device
to perform a method of displaying an augmented reality on a
display of a hand held device having a camera, the method
comprising identifying a context from at least one real
image captured by the device and selecting and positioning
a virtual image on the display, wherein the method
comprises displaying a virtual image comprising an
electronically captured, moving real world image.
The program may be contained within an app. The app may
also contain data, such as virtual image data.
The virtual image may comprise a sequence of still images
taken from a moving video.
The invention also comprises a computer program product,
storing, carrying or transmitting thereon or therethrough a
program for causing a device to perform a method of
displaying an augmented reality on a display of a hand held
device having a camera, the method comprising identifying a
context from at least one real image captured by the device
and selecting and positioning a virtual image on the
display, wherein the method comprises displaying a virtual
image comprising an electronically captured, moving real
world image.
The virtual image may comprise a sequence of still images
taken from a moving video.
The present invention may comprise any combination of the
features or limitations referred to herein, except such a
combination of features as are mutually exclusive.
Preferred embodiments of the present invention will now be
described by way of example only with reference to the
accompanying diagrammatic drawings in which:
Figure 1 shows a virtual image superimposed upon a camera
view of a real image, in accordance with a preferred
embodiment of the present invention;
Figure 2 shows schematically a first step in a context
recognition process in accordance with an embodiment of the
present invention;
Figures 3 and 3a show schematically an alternative first
step in a context recognition process, in which there are
multiple visible objects in the camera live view;
Figure 4 shows schematically an animation technique for use
with an embodiment of the present invention;
Figure 5 shows schematically a positioning process
according to an embodiment of the present invention;
Figure 6 shows schematically optional user controls for a
virtual image, according to an embodiment of the present
invention;
Figure 7 shows a first step in an anchoring process for the
image of Figure 6;
Figure 8 shows a further step in the anchoring process of
Figure 7;
Figure 9 shows schematically an alternative anchoring
process according to an embodiment of the present
invention;
Figure 10 shows schematically an automatic re-sizing
process for a virtual image, in accordance with an
embodiment of the present invention;
Figure 11 shows schematically an automatic re-sizing
process for a virtual image in an alternative scenario;
Figures 12 - 15 show schematically different steps in a
process for taking a photograph incorporating both a real
and a virtual image, according to an embodiment of the
present invention;
Figure 16 shows schematically a process for acquiring video
footage incorporating both real and virtual images; and
Figure 17 is a schematic flow diagram showing some key
steps in the process of displaying a virtual image in the
live view of a camera, in accordance with an embodiment of
the present invention.
The embodiment described below aims to provide an augmented
reality experience to a user of a hand held device, such as
a mobile phone, which incorporates an electronic processor,
a camera and a display. In particular, images taken from
video footage are displayed in a display of a hand held
device together with a live camera view, to create the
illusion that the subject of the video - ie the virtual
moving image - is present in the field of view of the
camera in real time.
In this context the term "real world" image means an image
taken from reality, such as a physical, real-world scenario
using an electronic photo-capture technique, e.g. video
recording.
In order to achieve this the device must undertake various
processes, including acquiring contextual information from
the camera view, obtaining an appropriate virtual image,
positioning the virtual image within the camera view,
optionally anchoring the virtual image with respect to the
context and optionally sizing the virtual image within the
camera view.
The processes may be performed by an electronic processor
of the hand held device.
The data necessary for the reconstruction of the virtual
moving image, together with one or more programs for
facilitating the necessary processes for manipulating it to
provide the augmented reality experience, are downloadable
to a hand held device in the form of a specialist program,
or software application, known widely as an app. The app
can preferably be updated to present the user with fresh
viewing experiences.
The first example described in detail below is that of an
augmented reality system for use as a guide at a visitor
attraction, in which a virtual image of a figure is
displayed within the real world camera view to provide
information, via an associated audio file, about the
attraction.
Turning to Figure 1, this shows schematically a camera 10
of a hand held device, in this case aimed at a well-known
object 12, which is recognisable to the device, and a
moving virtual image 14 of an actor playing the part of an
historical figure that is chosen because of its relevance
to the object 12. The device recognises the object, in this
case a statue, based upon a unique set of matrix points 12a
which have been stored in the downloaded app in an earlier
stage, and which can provide the device with contextual
information necessary for the subsequent selection, display
and manipulation of the virtual image 14.
Moving virtual images 14 are stored in the device as
sequences of still images taken from a video file,
synchronised with an appropriate audio file, when the app
is downloaded; the appropriate image is chosen after the
context has been determined.
Turning to Figure 2, this shows the chosen virtual image 14
as it is displayed in the camera view of the device, beside
the object 12.
Figure 3 shows schematically the scenario in which multiple
objects are detected by the device. In this case the object
12 is detected and so are two further objects 16 and 18.
The device displays all three objects together with
respective virtual buttons superimposed thereon so that the
user may select the object of interest by touching the
appropriate button on the screen, as is shown in Figure 3a.
Figure 4 shows schematically one method for animating a
virtual image. It uses a long established technique of
cutting a moving image into a succession of still frames 20
on a green screen background (not shown). The device then
plays back the sequence of still images, removing the green
screen background automatically as necessary. As the
individual images are replaced at a rate greater than six
frames per second, the human eye interprets them as a
continuous moving image. A soundtrack, optionally of MP3
format, is played in synchronism with the animation to
reinforce the illusion of continuous video footage. In this
example the animated figure is a Roman soldier, whose
commentary and actions are relevant to the attraction being
viewed through the camera display.
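By way of illustration only, the playback technique described above might be sketched as follows. The function names, frame rate and chroma-key threshold are hypothetical assumptions, not part of the specification:

```python
# Sketch of frame-sequence playback with green-screen removal (illustrative
# only; names and thresholds are assumptions, not from the specification).

FRAME_RATE = 12  # frames per second; must exceed ~6 fps for apparent motion

def frame_index(elapsed_seconds, frame_count):
    """Select which still frame to show, looping over the sequence."""
    return int(elapsed_seconds * FRAME_RATE) % frame_count

def remove_green_screen(pixel, threshold=1.3):
    """Treat a pixel as transparent when green clearly dominates red and blue."""
    r, g, b = pixel
    if g > threshold * max(r, b, 1):
        return None  # transparent: let the live camera view show through
    return pixel

# At 0.5 s into a 20-frame sequence, frame 6 is displayed; a strongly green
# pixel is keyed out while an ordinary pixel is kept.
print(frame_index(0.5, 20))                 # 6
print(remove_green_screen((20, 200, 30)))   # None (keyed out)
print(remove_green_screen((120, 110, 100))) # (120, 110, 100)
```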
Figure 5 shows schematically a technique for positioning
the image 14 with respect to the object 12. During creation
of a particular app, when the particular scene is first
investigated, a creative director will choose an optimum
placement for the virtual image, based upon a number of
factors, both artistic and practical. Once the optimum
position is chosen the system uses trigonometry to compute
the position of the image at real world spatial coordinates
x, y and z with respect to the object 12. An alternative is
to decide upon a zero point within the object and to
position the image using absolute x, y and/or z coordinates
from the zero point.
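Purely as a sketch of how a placement chosen in real-world coordinates could be mapped to the display, a simple pinhole camera model may be assumed; the function name and parameters below are illustrative, not taken from the specification:

```python
# Illustrative pinhole projection from camera-space coordinates to pixels.

def project_to_screen(point, focal_px, screen_w, screen_h):
    """Project a camera-space point (x right, y up, z forward, metres)
    to pixel coordinates using a pinhole camera model."""
    x, y, z = point
    u = screen_w / 2 + focal_px * x / z
    v = screen_h / 2 - focal_px * y / z
    return (u, v)

# A point straight ahead lands at the screen centre; an offset point is
# displaced in proportion to focal length over distance.
print(project_to_screen((0.0, 0.0, 2.0), 800, 640, 480))   # (320.0, 240.0)
print(project_to_screen((0.5, 0.25, 2.0), 800, 640, 480))  # (520.0, 140.0)
```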
Figure 6 shows schematically how the user can re-size or
reposition the image with respect to the object. The image
can be resized using a finger and thumb pinching and
spreading technique 22 whilst touching the screen. The
image can be moved using a drag and drop technique 24, and
an anchoring system (described below) can also be activated
or deactivated by a double finger double tap technique 26.
In a PLAY mode, a virtual PAUSE button 28 is also provided,
which converts to a virtual PLAY button (not shown) in
PAUSE mode.
In order to maintain the illusion that the figure is
actually present beside the attraction, it is necessary
that the position of the figure - ie the image 14 - be
spatially anchored with respect to the object 12. This is
because if the user moves whilst viewing the object and the
virtual image through the camera, an image that is fixed
with respect to the camera screen would quickly fail to
maintain the illusion of reality.
Figure 7 shows schematically an anchoring system according
to one embodiment of the present invention. The system uses
a pre-defined algorithm to seek objects that are either
prominent or else have a definitive shape within the camera
view. Once several objects have been located the system
uses advanced trigonometric techniques to evaluate the
scene displayed in the camera view and to allocate
proportion data to the virtual image. The system then locks
the image in x, y and z coordinates with respect to its
real world context.
Figure 8 shows schematically in more detail the anchoring
system according to the above-described embodiment of the
present invention. Firstly, a label 30 indicates that the
anchor system has been activated. Then the device
dynamically detects the nearest object 32 in the camera
view. In this case, the method used is one in which an
algorithm seeks to recognise objects by detecting a
pattern, rather than using pre-processed matrix points (as
per the example of Figure 1). This allows the algorithm to
look for real world objects to which the performance - ie
the virtual image - can be anchored. For example, the
algorithm could recognise the four edges of a snooker
table. This allows an improved anchoring technique as
recognition rules are created that allow the application of
higher or lower thresholds based upon a particular object,
or type of object. One suitable previously considered
algorithm is known as FAST (Features from Accelerated
Segment Test).
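As a much simplified, illustrative take on a FAST-style segment test: a pixel is scored as a corner candidate when enough of its ring neighbours are all brighter (or all darker) than the centre by a threshold. The real FAST detector uses a 16-pixel Bresenham circle; the 8-neighbour ring and parameter values below are only a sketch of the idea:

```python
# Simplified FAST-style corner test (illustrative; not the full algorithm).

RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def is_corner_candidate(img, y, x, threshold=20, needed=6):
    """Flag (y, x) when at least `needed` ring neighbours are all brighter,
    or all darker, than the centre pixel by `threshold`."""
    centre = img[y][x]
    brighter = sum(1 for dy, dx in RING if img[y + dy][x + dx] > centre + threshold)
    darker = sum(1 for dy, dx in RING if img[y + dy][x + dx] < centre - threshold)
    return brighter >= needed or darker >= needed

# A bright dot on a dark background is flagged; a flat region is not.
dot = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
flat = [[100] * 3 for _ in range(3)]
print(is_corner_candidate(dot, 1, 1))   # True
print(is_corner_candidate(flat, 1, 1))  # False
```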
A second object 34 is then detected by the device, to
provide depth information. The image is then anchored to
the first object - ie the position of the image in x, y and
z coordinates with respect to the location of the first
object 32 is determined. The device then checks regularly
to determine whether the object pattern - ie of objects 32
and 34 - has changed, which would occur if the user holding
the device had moved. If the device determines that there
has been movement the device re-scans the field of view and
determines the closest match to the initial pattern of
objects 32 and 34 to ensure that the position of the
virtual image 14 is still true.
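The re-scan just described can be sketched, in purely illustrative form, as a comparison of the stored anchor pattern against a fresh scan; the 2-D coordinates, tolerance value and function names are assumptions for the sake of the example:

```python
# Illustrative re-anchoring check: shift the virtual image with its anchor
# pattern when the stored and freshly scanned object positions have diverged.

def pattern_shift(stored, current):
    """Mean displacement between the stored anchor pattern and a fresh scan."""
    dxs = [c[0] - s[0] for s, c in zip(stored, current)]
    dys = [c[1] - s[1] for s, c in zip(stored, current)]
    return (sum(dxs) / len(dxs), sum(dys) / len(dys))

def re_anchor(image_pos, stored, current, tolerance=2.0):
    """Move the virtual image with its anchor pattern if the user has moved."""
    dx, dy = pattern_shift(stored, current)
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return image_pos  # no significant movement: keep the image where it is
    return (image_pos[0] + dx, image_pos[1] + dy)

# Both anchor objects have moved 5 px right, so the image follows them.
stored = [(0.0, 0.0), (10.0, 0.0)]
current = [(5.0, 0.0), (15.0, 0.0)]
print(re_anchor((100.0, 100.0), stored, current))  # (105.0, 100.0)
```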
The above-described approach allows a user to lock the
anchor to a known object within the display, in almost any
location, efficiently and invisibly. If there is no
specific object from which to take a reference - such as an
open field, for example - then the system reverts firstly to
a pre-loaded recognition library and then, if no view is
recognised, a digital compass and GPS reference are used to
fix the location of the image in real space.
The use of GPS and digital compass bearing by the anchoring
system is depicted schematically in Figure 9. This
configuration builds a basic real world map by using GPS
coordinates alongside compass bearings. The GPS coordinates
are used to lock a known longitude and latitude
configuration, whilst the bearings are used to detect 360
degree circular movement by the user. If the system detects
such a movement then the movie is returned to its original
locked position. The animation returns using algorithms
that provide a smooth and quick return to the coordinates,
with damping and speed of return based on the distance
moved.
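One simple, purely illustrative way to realise such a damped return is an exponential approach, where each step covers a fixed fraction of the remaining offset, so larger offsets move back faster and the motion slows smoothly near the locked position. The damping factor below is an assumed value:

```python
# Illustrative damped return toward a locked bearing/position.

def return_step(current, target, damping=0.25):
    """One damped step back toward the locked position: the step size is
    proportional to the remaining offset, giving a smooth, slowing approach."""
    return current + damping * (target - current)

# Starting 40 degrees of bearing away from the locked position, three steps
# reduce the offset geometrically (40 -> 30 -> 22.5 -> 16.875).
pos = 40.0
for _ in range(3):
    pos = return_step(pos, 0.0)
print(pos)  # 16.875
```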
The apparent size of the image with respect to objects in
the camera view is also important to maintain the illusion
of reality. Figure 10 shows an automatic sizing operation
in which the image 14 is adjusted with respect to the
object 12 when a user, viewing the object through the
camera device, moves either closer to or further away from
the object.
Sophisticated algorithms are employed by the device to
adjust the size of the image smoothly as the user moves
towards or away from the object 12. The autofocus function
of the camera lens may be employed to provide data
concerning a change in the distance from the object. If the
device does not possess an autofocus function then the
distance to the recognised object can be calculated using
stored data about its origin. Both techniques can be used,
where available, to provide a more accurate reading.
Alternatively, the user can manually re-size the image 14
using the pinch technique 22 described earlier.
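The inverse-distance scaling implied above can be sketched as follows; the reference values are hypothetical, and in practice the current distance would come from autofocus data or from stored information about the recognised object, as described:

```python
# Illustrative automatic re-sizing: apparent height varies inversely
# with distance from the recognised object.

def rescale(base_height_px, base_distance_m, current_distance_m):
    """Height of the virtual image, scaled from a reference distance."""
    return base_height_px * base_distance_m / current_distance_m

# Moving from 10 m to 20 m from the object halves the displayed height.
print(rescale(200, 10.0, 20.0))  # 100.0
```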
If a relatively cramped location is detected by the system,
such as an indoor location, or a medieval street scene for
example, the device automatically re-sizes the image to a
larger size so as to maintain realism. Figure 11 depicts
the enlarged image 14 in such a case.
The system also allows the capture of still or video images
bearing both the real view and the virtual image. Figures
12-15 show schematically a process for taking a photograph
with the virtual image 14 included. In Figure 12 a real
person 36 walks into a scene in which the virtual image 14
of the figure is already positioned. In Figure 13 the
photograph is taken and stored and the coordinates of the
virtual image are recorded. In Figure 14 the system
post-processes the image 14 and the perspective is
auto-detected. The composite image is then re-saved. In
Figure 15 the user is invited to share the stored composite
image via virtual buttons 38, accessing several common media. A
short video sequence can be recorded and shared in a
similar way.
Figure 16 shows schematically an example of a complete
process according to the embodiment described above.
At step 100 the process begins. At step 102 object
recognition rules are read from a database. At step 104 the
device reads the view and at step 106 it checks for a
recognisable pattern. The device loops until a pattern is
detected. Once a pattern is detected an appropriate moving
image is selected from a library at step 108. At step 110
the image is positioned and play begins. Step 112 awaits a
user input. Options to exit 114, re-size 116, anchor 118 or
reposition 120 are available. If the user selects to exit
the app at step 114 the app is stopped at step 122.
Otherwise the video image continues to play at step 124.
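The recognition loop of steps 104-108 can be sketched, in hypothetical form, as follows; the `frames`, rules dictionary and image library are stand-ins for the device's camera feed and database, not a real API:

```python
# Hypothetical driver for steps 104-108: read the view, loop until a
# recognisable pattern is found, then select the matching moving image.

def run(frames, recognition_rules, image_library):
    """Check each camera frame against the recognition rules (steps 104-106)
    and return the selected moving image once a pattern is found (step 108)."""
    for frame in frames:
        pattern = recognition_rules.get(frame)
        if pattern is not None:
            return image_library[pattern]
    return None  # no pattern detected in this batch of frames

rules = {"statue": "roman_soldier"}          # hypothetical pattern -> image id
library = {"roman_soldier": "soldier_clip"}  # hypothetical image library
print(run(["sky", "statue"], rules, library))  # soldier_clip
```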
Figure 17 shows an alternative embodiment in which an
object recognition database 40 is split into several
smaller databases 42, 44, 46, 48 according to user
location. Three or more angles of an object are checked 50
and once the object has been detected the virtual image is
launched 52.
The above examples describe using touch controls, though
the particular controls employed may differ from those
described. Where the apparatus supports it, non-contact
gestures may be employed to control the device. Similarly,
where the apparatus supports it, voice commands may be used
to control the apparatus.
The contextual information may be derived from a "real
world" image, as viewed through the camera of the device,
or may be derived from a two-dimensional image, such as a
printed page, photograph or electronically displayed image.
This allows the techniques described above to be used to
enhance a user experience in a wide variety of
circumstances, such as viewing a printed publication or
advertisement. In one embodiment (not shown), the virtual
image can be made to appear to rise or "pop" up from such a
two-dimensional context.
Image processing techniques may be employed to create
virtual shadows for the virtual image, so as to enhance the
perception that the virtual image is a real one. Similarly,
image processing techniques may be employed to balance the
apparent brightness of the virtual image relative to the
real world context being viewed through the device.
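One simple, illustrative form of the brightness balancing described above is to scale the virtual image so its mean brightness matches that of the live view; the function name and clipping behaviour are assumptions for the example:

```python
# Illustrative brightness balancing of the virtual image against the live view.

def balance_brightness(virtual_pixels, live_mean):
    """Scale a list of greyscale pixel values so their mean matches the
    live view's mean brightness, clipping at the 8-bit maximum."""
    v_mean = sum(virtual_pixels) / len(virtual_pixels)
    gain = live_mean / v_mean
    return [min(255, p * gain) for p in virtual_pixels]

# A virtual image averaging 100 is dimmed to match a live view averaging 50.
print(balance_brightness([100, 100], 50.0))  # [50.0, 50.0]
```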
Although the examples described above are of a two-dimensional
viewing experience, the techniques described
herein may also be applied to an apparent three-dimensional
viewing experience where the apparatus supports this, such
as in 3-D video playback formats.
In the above description, the term "virtual image" is
intended to refer to a previously captured or separately
acquired image - which is preferably a moving image - that
is displayed on a display of the device whilst the user
views the real, or current, image or images being captured
by the camera of the device. The virtual image is itself a
real one, from a different reality, that is effectively cut
out from that other reality and transplanted into another
one - the one that the viewer sees in the display of his
device.
Whilst endeavouring in the foregoing specification to draw
attention to those features of the invention believed to be
of particular importance, it should be understood that the
applicant claims protection in respect of any patentable
feature or combination of features referred to herein, and/or
shown in the drawings, whether or not particular emphasis has
been placed thereon.
Claims
1. Apparatus for displaying an augmented reality on
a display of a hand held device having a camera,
the apparatus comprising a context identification
unit for identifying a context from at least one
real image captured by the device, a virtual
image retrieval unit for selecting and displaying
a virtual image in the display and a virtual
image positioning unit for positioning the
virtual image in the display, wherein the
apparatus is arranged to display a virtual image
comprising an electronically captured, moving
real world image.
2. Apparatus according to Claim 1, wherein the
virtual image comprises a sequence of still
images taken from a moving video.
3. Apparatus according to Claim 1 or 2, wherein the
virtual image comprises a continuous moving video
image.
4. Apparatus according to any of Claims 1-3, wherein
the virtual image comprises an image of a person,
creature or other real-world object or article.
5. Apparatus according to any of the preceding
claims, wherein the context identification unit
is arranged in use to identify a context by
comparing at least one object in a field of view
with stored data from a plurality of objects.
6. Apparatus according to any of the preceding
claims, wherein the image retrieval unit is
arranged in use to select an image from a
plurality of stored images according to context
information determined by the context
identification unit.
7. Apparatus according to any of the preceding
claims, wherein the positioning unit is arranged
in use to position the virtual image according to
context information determined by the context
identification unit.
8. Apparatus according to any of the preceding
claims, wherein the positioning of the image by
the positioning unit includes sizing of the image
in the display.
9. Apparatus according to any of the preceding
claims, wherein the positioning of the image by
the positioning unit includes anchoring the image
in the display, with respect to context
information determined by the context
identification unit.
10. A method of displaying an augmented reality
on a display of a hand held device having a
camera, the method comprising identifying a
context from at least one real image captured by
the device and selecting and positioning a
virtual image on the display, wherein the method
comprises displaying a virtual image comprising
an electronically captured, moving real world
image.
11. A method according to Claim 10, wherein the
method comprises displaying a virtual image
comprising a sequence of still images taken from
a moving video.
12. A method according to Claim 10 or 11,
wherein the method comprises identifying a
context by comparing at least one object in the
field of view with stored data from a plurality
of objects.
13. A method according to any of Claims 10-12,
wherein the method comprises selecting an image
from a plurality of stored images according to
determined context information.
14. A method according to any of Claims 10-13,
wherein the method also comprises positioning the
virtual image according to context information
determined by the context identification unit.
15. A method according to any of Claims 10-14,
wherein the positioning of the image includes
sizing of the image in the display.
16. A method according to any of Claims 10-15,
wherein the positioning of the image also
includes anchoring the image in the display with
respect to context information determined by the
context identification unit.
17. A program for causing a device to perform a
method of displaying an augmented reality on a
display of a hand held device having a camera,
the method comprising identifying a context from
at least one real image captured by the device
and selecting and positioning a virtual image on
the display, wherein the method comprises
displaying a virtual image comprising an
electronically captured, moving real world image.
18. A computer program product, storing,
carrying or transmitting thereon or therethrough
a program for causing a device to perform a
method of displaying an augmented reality on a
display of a hand held device having a camera,
the method comprising identifying a context from
at least one real image captured by the device
and selecting and positioning a virtual image on
the display, wherein the method comprises
displaying a virtual image comprising an
electronically captured, moving real world image.