ABSTRACT: The present invention provides a method and system for converting a surface into a touch surface. In accordance with a disclosed embodiment, the system shall include a vision engine configured to capture a set of location co-ordinates of a set of boundary points on the surface. The system shall further include a drawing interface configured to create a set of mesh regions on the surface, and a hash table configured to store a point co-ordinate of each point of a mesh region and a reference location co-ordinate of the each point. Further, the system shall include an interpretation engine configured to analyze a position of a user object on the surface and trigger a screen event at the position based on predetermined criteria. REF FIG: 1
METHODS, SYSTEMS AND COMPUTER-READABLE MEDIA FOR CONVERTING A
SURFACE TO A TOUCH SURFACE
BACKGROUND
[1] The invention relates generally to touch screen technology. More specifically, the present invention relates to a method and system for converting a projected surface to a touch surface.
[2] Current technology for converting a flat surface, such as a table or a wall, into an interactive touch surface involves the use of an advanced depth sensing camera. The depth sensing camera is usually placed in front of the flat surface. For instance, if the flat surface is a table top, the depth sensing camera is placed on the ceiling, facing the table top. In another instance, where the flat surface is a screen projected from a computer application, the depth sensing camera is usually placed in front of the projected screen, between the projected screen and the projector. When the user moves a finger, a stylus or any other object on the flat surface, the depth sensing camera captures that movement. The movement is interpreted into one or more screen events, which is essential for making the flat surface behave as a touch screen display.
[3] A disadvantage of the aforesaid positions of the depth sensing camera is that the flat surface may get obscured when the user comes between the surface and the depth sensing camera. As a result, movement that occurs while the surface is obscured may not be captured by the depth sensing camera. Thus, there is a need for a method and system wherein the depth sensing camera is placed in a position other than the aforesaid directions, such that each position of the user can be captured.
[4] The alternate system and method must also interpret the movement of the object into a standard screen event of a mouse pointer on a computer screen. Thus, a unique system and method for converting a flat surface to a touch screen is proposed.
SUMMARY OF THE INVENTION
[5] The present invention provides a method and system for converting a surface to a touch surface. In accordance with the disclosed embodiment, the method may include capturing
a set of location co-ordinates of a set of boundary points on the projected surface. Further, the method may include creating a set of mesh regions from the set of boundary points and mapping a location co-ordinate of each point in a mesh region to a reference location co-ordinate of the each point. Finally, the method shall include the step of triggering a screen event at a position on the surface, based on predetermined criteria.
[6] In an additional embodiment, a system for converting a surface to a touch surface is disclosed. The system shall include a vision engine configured to capture a set of location co-ordinates of a set of boundary points on the surface. The system shall further include a drawing interface configured to create a set of mesh regions on the surface, and a hash table configured to store a point co-ordinate of each point of a mesh region and a reference location co-ordinate of the each point. Further, the system shall include an interpretation engine configured to analyze a position of a user object on the surface and trigger a screen event at the position based on predetermined criteria.
[7] These and other features, aspects, and advantages of the present invention will be better understood with reference to the following description and claims.
BRIEF DESCRIPTION OF DRAWINGS
[8] FIG. 1 is a flowchart illustrating an embodiment of a method for converting a surface to a touch surface.
[9] FIG. 2 is a flowchart illustrating a preferred embodiment of a method for converting a surface to a touch surface.
[10] FIG. 3 shows an exemplary system for converting a surface to a touch surface.
[11] FIG. 4 illustrates a generalized example of a computing environment 400.
[12] While systems and methods are described herein by way of example and
embodiments, those skilled in the art recognize that systems and methods for converting a surface to a touch surface are not limited to the embodiments or drawings described. It should be understood that the drawings and description are not intended to be limiting to the particular form disclosed. Rather, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the appended claims. Any headings used herein are for organizational purposes only and are not meant to limit the scope of the description or the claims. As used herein, the word "may" is used in a
permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean including, but not limited to.
DETAILED DESCRIPTION
[13] Disclosed embodiments provide computer-implemented methods, systems, and
computer-program products for converting a surface to a touch surface. More specifically, the methods and systems disclosed employ a sensor to capture a movement of an object on the surface and to interpret an action of the object into a standard screen event of a typical computer application. The sensor can be an available depth sensing camera such as the Kinect developed by Microsoft Corporation, USA.
[14] FIG. 1 is a flowchart that illustrates a method performed in converting a surface to
a touch surface in accordance with an embodiment of the present invention. At step 102, a set of location co-ordinates of a set of boundary points on the surface can be captured. The set of location co-ordinates is usually measured with respect to a sensor located in a perpendicular direction of the surface. In an embodiment, the set of location co-ordinates can refer to a set of Kinect co-ordinates, where the Kinect is the sensor in such embodiment. The sensor is capable of tracking a user and a predefined user interaction. Further, the set of boundary points can be captured by a predefined user interaction with the surface. In an instance, a user may place a finger or an object on a point on the surface and utter a predefined word such as 'capture', signifying to an embedded vision engine to capture the point as a boundary point. The trigger could also be a simple gesture, such as raising a hand above the shoulder, to trigger the embedded vision engine. Further, at step 104, a set of mesh regions can be created from the set of boundary points. Each mesh region can include a subset area of the surface, such that each mesh region includes a subset of points of the surface. A point co-ordinate of each point in a mesh region can be mapped to a reference location co-ordinate of the each point, at step 106. In an embodiment, the reference location co-ordinates may refer to computer resolution co-ordinates. The point co-ordinate of the each point is usually measured with respect to the sensor, and the reference location co-ordinate signifies a resolution of the surface. In an instance, the resolution of the surface can be 1024 x 768 pixels, indicating a total number of points required to represent the surface. The reference location co-ordinate of the each point can be a pixel as per a resolution of a computer screen projected on the surface.
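For illustration only, a minimal sketch of the mapping in steps 104 and 106 is given below, assuming the four captured boundary points are the corners of the surface and the reference resolution is 1024 x 768 pixels; the function names, the bilinear interpolation scheme and the grid step are assumptions of this sketch, not limitations of the disclosed method.

```python
# Illustrative sketch only: build a hash table that maps sensor (x, y, z)
# co-ordinates of mesh-region points to reference pixel co-ordinates by
# bilinear interpolation between four captured corner points. The names,
# grid step and rounding are assumptions of this sketch.

def build_coordinate_map(corners, resolution=(1024, 768), grid_step=4):
    """corners: [top_left, top_right, bottom_right, bottom_left], each an
    (x, y, z) sensor co-ordinate. Returns {sensor point: (pixel_x, pixel_y)}."""
    tl, tr, br, bl = corners
    width, height = resolution
    table = {}
    for px in range(0, width, grid_step):
        for py in range(0, height, grid_step):
            u, v = px / (width - 1), py / (height - 1)
            # Interpolate the sensor co-ordinate corresponding to this pixel.
            top = [tl[i] + u * (tr[i] - tl[i]) for i in range(3)]
            bottom = [bl[i] + u * (br[i] - bl[i]) for i in range(3)]
            point = tuple(round(top[i] + v * (bottom[i] - top[i]), 3) for i in range(3))
            table[point] = (px, py)
    return table

# Example corner co-ordinates in metres (assumed values).
corners = [(0.10, 0.80, 1.20), (0.90, 0.80, 1.22), (0.90, 0.20, 1.25), (0.10, 0.20, 1.23)]
coordinate_map = build_coordinate_map(corners)
```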
[15] Finally, based on predetermined criteria, at step 108, a screen event can be
triggered at a position on the surface, when an object interacts with the surface at the position. The screen event can include a single click mouse event, a double click mouse event or a drag operation, as performed on a computer screen. The predetermined criteria can include a movement of the object at the position, and a time duration of contact of the object with the surface. For instance, if a touch at a point lasts longer than a time threshold and the object is then removed from the touch vicinity, a double click is inferred. In one of the embodiments the time threshold may be 0.5 sec. If the object is in contact with the surface for a time greater than the threshold and there is movement with continued touch, a determination of the drag operation can be made. If there is a touch and the object is quickly removed from the vicinity, a single click is inferred.
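A minimal sketch of this event inference is shown below, assuming the 0.5 second time threshold mentioned above; the function name and the 10 pixel movement threshold are illustrative assumptions.

```python
# Sketch: infer a screen event from contact duration and movement, following
# the criteria above. The 0.5 s time threshold is the value given in one
# embodiment; the movement threshold is an assumed figure.

TIME_THRESHOLD_S = 0.5
MOVE_THRESHOLD_PX = 10   # assumed: pixels of movement that count as a drag

def classify_touch(contact_duration_s, movement_px, still_touching):
    """Return 'drag', 'double_click' or 'single_click' for a touch."""
    if contact_duration_s > TIME_THRESHOLD_S and movement_px > MOVE_THRESHOLD_PX and still_touching:
        return "drag"
    if contact_duration_s > TIME_THRESHOLD_S and not still_touching:
        return "double_click"
    return "single_click"

print(classify_touch(0.2, 2, False))   # quick tap -> single_click
print(classify_touch(0.8, 1, False))   # long touch, then released -> double_click
print(classify_touch(0.9, 40, True))   # long touch with movement -> drag
```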
[16] FIG. 2 illustrates an alternate embodiment of a method of practicing the present
invention. At step 202, a set of location co-ordinates of a set of boundary points on the surface can be captured via a predefined user interaction with the surface. A location co-ordinate is usually measured with respect to a sensor located in a perpendicular direction of the surface. The sensor can be a device for sensing a movement of a user in a line of sight of the sensor. In an instance, the Kinect developed by Microsoft Corporation may be used as the sensor. In one embodiment, the sensor may be placed perpendicular to the surface and is able to track a predefined user interaction. In that instance, the set of location co-ordinates can be a set of Kinect co-ordinates. The set of boundary points shall define the area of the surface intended to be converted into a touch screen. At step 204, the location co-ordinate of each point of the set of boundary points can be stored in a hash table. At step 206, the set of location co-ordinates of the set of boundary points can be mapped to a set of reference location co-ordinates, where the reference location co-ordinates signify a resolution of the surface. In an embodiment, the set of reference location co-ordinates may refer to a set of computer resolution co-ordinates of a computer. Further, a set of mesh regions can be created from the set of boundary points, at step 208. A point co-ordinate of each point of a mesh region can be mapped to a reference location co-ordinate of the each point, by a lookup procedure on the hash table, at step 210. The hash table may include a memory hash table that can store the location co-ordinate of the each point of the surface against the reference location co-ordinate of the each point. In the disclosed embodiment, the reference location co-ordinate of the each point is a pixel as per a resolution of a computer screen of the computer that is usually projected on the surface.
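The lookup procedure of step 210 could, under the assumptions of this sketch, quantize the sensor co-ordinate so that slightly different readings of the same point hash to the same key; the 5 mm quantization step and the function names are illustrative, not part of the disclosed embodiment.

```python
# Sketch: look up the reference pixel for a sensor co-ordinate in the hash
# table. Quantizing the key (to 5 mm here) is an assumed detail, used so that
# noisy readings of the same physical point hash to the same key.

def quantize(point, step=0.005):
    return tuple(round(round(c / step) * step, 3) for c in point)

def lookup_reference(table, sensor_point):
    key = quantize(sensor_point)
    return table.get(key)   # None when no mapping exists for this point

table = {quantize((0.50, 0.50, 1.22)): (512, 384)}
print(lookup_reference(table, (0.501, 0.499, 1.221)))   # -> (512, 384)
print(lookup_reference(table, (0.70, 0.10, 1.30)))      # -> None: fall back to nearest-point search
```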
[17] Further, at step 212, a determination of a contact of the object with the surface is
made. In an event the distance of the object from the surface is less than a threshold, a contact of the object with the surface can be interpreted, at step 214. In an event the object is at a distance greater than the threshold, the object may not be interpreted to make contact with the surface. A point co-ordinate of a position of the contact of the object with the surface can be calculated at step 216, by a series of algorithms. At step 218, a reference location co-ordinate of the point co-ordinate can be retrieved from the hash table. When a map of the point co-ordinate does not exist in the hash table, a nearest reference location co-ordinate to the point co-ordinate can be determined by running a set of nearest point determination algorithms. In an embodiment, one of the series of algorithms for calculating the point co-ordinate may include receiving frames from the Kinect device. Each received frame is a depth map that may be described as co-ordinates representing the depth image resolution (x, y) and the depth value (z). The co-ordinates of each point (x, y, z) in a frame are stored in a hash table. The mesh regions may be constructed entirely through simple linear extrapolation and are stored in the hash table. In another embodiment, one of the nearest point determination algorithms may be used to calculate the nearest reference location co-ordinate, which includes checking all the depth points in a frame whose x, y and z co-ordinates fall within the four corners of the touch surface. This is done by computing the minimum value of x, the minimum value of y and the minimum value of z from the data of the four corners of the surface. Similarly, the maximum value of x, the maximum value of y and the maximum value of z are computed from the four corner values of the surface. This gives a set of points whose x, y, z fall within the minimum and maximum values of x, y, z of the corners of the touch surface. If there are no points after this computation, it implies that there is no object near the touch surface. If there are one or more points after this computation, it implies that there is an object within the threshold distance from the touch surface. From this set of points, those points which do not have a corresponding entry in the hash table are filtered out. From the filtered set of points, the value of x that occurs the maximum number of times in the given depth map, and whose distance from the surface is below another threshold value, is selected. The same selection process is repeated for y and z. This point (x, y, z) is selected and is matched in the hash table. The corresponding point from the hash table is extracted and is treated as the point of touch. A touch accuracy of up to fifteen millimeters by fifteen millimeters of the touch surface can be achieved.
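The corner-based filtering and most-frequent-value selection described above could be sketched as follows; the frame format (a flat list of (x, y, z) points), the key scheme, and the omission of the additional distance-to-surface check are assumptions of this sketch.

```python
# Illustrative sketch: pick the touch point from one depth frame as described
# above: keep points inside the bounding box of the four corners, drop points
# with no entry in the hash table, then take the most frequent x, y and z.
# The key scheme must match the one used when the table was built (assumed
# here: rounding to 3 decimals). The extra distance-to-surface check is omitted.

from collections import Counter

def key_of(point):
    return tuple(round(c, 3) for c in point)

def find_touch_point(frame_points, corners, table):
    mins = [min(c[i] for c in corners) for i in range(3)]
    maxs = [max(c[i] for c in corners) for i in range(3)]
    near = [p for p in frame_points
            if all(mins[i] <= p[i] <= maxs[i] for i in range(3))]
    if not near:
        return None                       # no object near the touch surface
    mapped = [p for p in near if key_of(p) in table]
    if not mapped:
        return None
    x = Counter(p[0] for p in mapped).most_common(1)[0][0]
    y = Counter(p[1] for p in mapped).most_common(1)[0][0]
    z = Counter(p[2] for p in mapped).most_common(1)[0][0]
    return table.get(key_of((x, y, z)))   # reference co-ordinate of the touch
```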
[18] Further, based on a predetermined criteria, a screen event can be triggered at the position on the surface, at step 220. The predetermined criteria may include a movement of the object at the position, and a time duration of the contact of the object with the surface. Further, the screen event may include a single click mouse event, a double click mouse event or a drag operation on a standard computer screen.
[19] In alternate embodiments, the surface can be an LCD screen, a rear projection or a front projection of a computer screen, a banner posted on a wall, a paper menu and the like. In an alternate embodiment, where the surface is a banner posted on a wall, a set of dimensions of the banner and a plurality of information about the banner can be stored within a computer. When the user touches an image or a pixel co-ordinate on the banner, the Kinect can detect the position, and a relevant event on the pixel co-ordinate or on the image, as configured, may be fired. In an instance where the banner is a hotel menu card, when the user points to a particular icon signifying a menu item, the computer can be programmed to place an order for the menu item.
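For the banner or menu card use case, the association between touched pixel regions and configured actions might be sketched as below; the regions, item names and the order_item function are illustrative assumptions, not part of the disclosed embodiment.

```python
# Sketch: dispatch a configured action when a touch lands inside a banner
# region. The regions, the order_item function and the pixel values are
# assumed for illustration.

def order_item(name):
    print(f"Order placed for: {name}")

# Each region: (x_min, y_min, x_max, y_max) in reference pixel co-ordinates.
MENU_REGIONS = [
    ((100, 100, 300, 200), lambda: order_item("Coffee")),
    ((100, 250, 300, 350), lambda: order_item("Sandwich")),
]

def dispatch_touch(px, py):
    for (x0, y0, x1, y1), action in MENU_REGIONS:
        if x0 <= px <= x1 and y0 <= py <= y1:
            action()
            return True
    return False

dispatch_touch(150, 280)   # -> "Order placed for: Sandwich"
```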
[20] FIG. 3 illustrates an exemplary system 300a in which various embodiments of the
invention can be practiced. The system comprises a vision engine 302, a drawing interface 304, a hash table 308, an interpretation engine 310, a sensor 314, a surface 312, a projector 318 and a processor 316. The processor 316 can include the vision engine 302, the drawing interface 304, the hash table 308, and the interpretation engine 310. Further, the processor 316 can be communicatively coupled with the sensor 314 and the projector 318, which is placed facing the surface 312.
[21] The vision engine 302 is configured to capture a set of boundary points of the surface 312 when a user 320 interacts with the surface 312, via an object 322, in a predefined manner. The predefined manner may include the user 320 placing the object 322 on the surface 312 at each of the set of boundary points and uttering a word such as "capture" at each boundary point. The set of boundary points shall define an area of the surface to be converted into a touch screen surface. The object 322 can be a finger of the user 320, a stylus or any other material that may be used by the user 320 for performing an interaction with the surface 312. The drawing interface 304 can be configured to draw a set of mesh regions from the captured set of boundary points. Further, the hash table 308 can be configured to store a point co-ordinate of each point of a mesh region and a reference location co-ordinate of the each point. The point co-ordinate is usually measured with respect to the sensor 314, whereas the reference location co-ordinate is usually measured in reference to the resolution of the surface 312.
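A minimal sketch of the boundary-capture behaviour of the vision engine 302 is given below, under the assumption that a separate speech or gesture recognizer signals the "capture" trigger and that the sensor supplies the tracked object's co-ordinate at that moment; all names and the event format are illustrative.

```python
# Sketch: record a boundary point each time the predefined trigger (for
# example the spoken word "capture" or a raised-hand gesture) fires while the
# user holds the object on the surface. The event interface is assumed.

class BoundaryCapture:
    def __init__(self, required_points=4):
        self.required_points = required_points
        self.boundary = []

    def on_trigger(self, fingertip_xyz):
        """Call when the trigger interaction is recognised; fingertip_xyz is
        the sensor co-ordinate of the tracked object at that moment."""
        if len(self.boundary) < self.required_points:
            self.boundary.append(fingertip_xyz)
        return self.is_complete()

    def is_complete(self):
        return len(self.boundary) >= self.required_points

capture = BoundaryCapture()
for corner in [(0.1, 0.8, 1.2), (0.9, 0.8, 1.22), (0.9, 0.2, 1.25), (0.1, 0.2, 1.23)]:
    capture.on_trigger(corner)
print(capture.boundary)   # the four captured corner co-ordinates
```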
[22] The interpretation engine 310 can be configured to interpret an interaction of the object 322 with the surface 312 as a standard screen event. Based on a distance of the object 322 from the surface 312, the interpretation engine 310 can determine whether the object 322 has made contact with the surface 312. In an instance, if the object 322 is at a distance less than a predetermined threshold, the interpretation engine 310 may interpret that the object 322 has contacted the surface 312. In one of the embodiments, the threshold distance may be 2 centimeters at a particular location of the screen; other locations may have a smaller threshold for the same setup. Further, the interpretation engine 310 can detect a position at which the object 322 makes the contact with the surface 312. Further, a point co-ordinate of a point at the position can be fetched from the sensor 314. The reference location co-ordinate of the point co-ordinate can be retrieved from the hash table 308. The interpretation engine 310 can be configured to determine a nearest reference location co-ordinate to the point co-ordinate, when a map of the point co-ordinate is absent in the set of reference location co-ordinates. The interpretation engine 310 can be further configured to trigger a screen event at the position based on predetermined criteria. The predetermined criteria may include a movement of the object 322 at the position, and a time duration of the contact of the object 322 with the surface. The screen event can include a standard screen event such as a single click mouse event, a double click or a drag operation. For instance, if the time for which the object 322 is in contact with the surface 312 is greater than a time threshold and the object is then removed from the touch vicinity, a double click is inferred, and the screen event triggered can be a double click screen event. If the object is in contact with the surface for a time greater than the threshold and there is movement with continued touch, a determination of the drag operation can be made. If there is a touch and the object is quickly removed from the vicinity, a single click is inferred. The reference location co-ordinate of the each point can be a pixel as per a resolution of a computer screen projected on the surface.
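Contact determination with a location-dependent distance threshold could be sketched as follows; the 2 centimeter value is the one given above, while the per-region override table and names are assumptions of this sketch.

```python
# Sketch: decide whether the object is in contact with the surface, using a
# distance threshold that may differ by location. The 2 cm default is the
# value given above; the per-region overrides are assumed examples.

DEFAULT_THRESHOLD_M = 0.02           # 2 centimeters
REGION_THRESHOLDS_M = {              # assumed per-region overrides
    "far_corner": 0.012,
}

def is_in_contact(distance_to_surface_m, region="default"):
    threshold = REGION_THRESHOLDS_M.get(region, DEFAULT_THRESHOLD_M)
    return distance_to_surface_m < threshold

print(is_in_contact(0.015))                 # True  (within 2 cm)
print(is_in_contact(0.015, "far_corner"))   # False (stricter threshold there)
```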
[23] In the disclosed embodiment, the surface is a front projection of a computer
screen, where the projector 318 is placed in front of the surface 312. In an alternate embodiment, the surface may be a rear projection of the computer screen, where the projector 318 can be placed behind the surface 312. In another embodiment, the surface can be an image mounted on a wall, such as a banner containing menu items displayed to a user at a shopping area.
[24] In yet another embodiment of the system, as illustrated in FIG. 3b, the surface 312 can be an LED screen, communicatively coupled with the processor 316. In the disclosed embodiment, the sensor 314 can be communicatively coupled with the processor 316. The vision engine 302, the drawing interface 304, the hash table 308, and the interpretation engine 310 can be coupled within the processor 316, as required for converting the surface 312 into a touch screen area. The implementation and working of the system may differ based on an application of the system. In an embodiment where the surface is a banner posted on a wall, the dimensions of the banner can be stored within a memory of the processor 316. When the user touches a point on the banner, the point co-ordinates of the point shall be communicated to the processor 316, and the vision engine 302, the hash table 308, and the interpretation engine 310 shall perform functions as described in the aforementioned embodiments.
[25] One or more of the above-described techniques can be implemented in or
involve one or more computer systems. FIG. 4 illustrates a generalized example of a computing environment 400. The computing environment 400 is not intended to suggest any limitation as to scope of use or functionality of described embodiments.
[26] With reference to Fig. 4, the computing environment 400 includes at least one
processing unit 410 and memory 420. In Fig. 4, this most basic configuration 430 is included within a dashed line. The processing unit 410 executes computer-executable
instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory 420 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. In some embodiments, the memory 420 stores software 480 implementing described techniques.
[27] A computing environment may have additional features. For example, the
computing environment 400 includes storage 440, one or more input devices 450, one or more output devices 460, and one or more communication connections 470. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 400. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 400, and coordinates activities of the components of the computing environment 400.
[28] The storage 440 may be removable or non-removable, and includes magnetic
disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 400. In some embodiments, the storage 440 stores instructions for the software 480.
[29] The input device(s) 450 may be a touch input device such as a keyboard, mouse,
pen, trackball, touch screen, or game controller, a voice input device, a scanning device, a digital camera, or another device that provides input to the computing environment 400. The output device(s) 460 may be a display, printer, speaker, or another device that provides output from the computing environment 400.
[30] The communication connection(s) 470 enable communication over a
communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media
include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
[31] Implementations can be described in the general context of computer-readable
media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, within the computing environment 400, computer-readable media include memory 420, storage 440, communication media, and combinations of any of the above.
[32] Having described and illustrated the principles of our invention with reference to
described embodiments, it will be recognized that the described embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of the described embodiments shown in software may be implemented in hardware and vice versa.
[33] As will be appreciated by those of ordinary skill in the art, the foregoing examples, demonstrations, and method steps may be implemented by suitable code on a processor-based system, such as a general purpose or special purpose computer. It should also be noted that different implementations of the present technique may perform some or all of the steps described herein in different orders or substantially concurrently, that is, in parallel. Furthermore, the functions may be implemented in a variety of programming languages. Such code, as will be appreciated by those of ordinary skill in the art, may be stored or adapted for storage in one or more tangible machine readable media, such as on memory chips, local or remote hard disks, optical disks or other media, which may be accessed by a processor-based system to execute the stored code. Note that the tangible media may comprise paper or another suitable medium upon which the instructions are printed. For instance, the instructions may be electronically captured via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
[34] The following description is presented to enable a person of ordinary skill in the
art to make and use the invention and is provided in the context of the requirement for obtaining a patent. The present description is the best presently-contemplated method for carrying out the present invention. Various modifications to the preferred embodiment will be readily apparent to those skilled in the art, the generic principles of the present invention may be applied to other embodiments, and some features of the present invention may be used without the corresponding use of other features. Accordingly, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.
[35] While the foregoing has described certain embodiments and the best mode of
practicing the invention, it is understood that various implementations, modifications and examples of the subject matter disclosed herein may be made. It is intended by the following claims to cover the various implementations, modifications, and variations that may fall within the scope of the subject matter described.
CLAIMS
What is claimed is:
1. A method of converting a surface to a touch surface, the method comprising:
capturing a set of location co-ordinates of a set of boundary points on the surface;
creating a set of mesh regions from the set of boundary points;
mapping a point co-ordinate of each point in a mesh region to a reference location co-ordinate of the each point; and
triggering a screen event at a position on the surface, based on a predetermined criteria.
2. The method of claim 1, wherein a location co-ordinate is measured with respect to a sensor located in a perpendicular direction of the surface and is able to track a user and predefined user interaction.
3. The method of claim 1, wherein the set of boundary points is obtained by the predefined user interaction with the surface.
4. The method of claim 2, further comprising:
storing the point co-ordinate of the each point in a hash table; and
mapping the set of location co-ordinates of the set of boundary points to a set of reference location co-ordinates.
5. The method of claim 1, wherein the step of triggering a screen event comprises:
determining a contact of an object with the surface using an interpretation engine;
calculating a point co-ordinate of a position, whereby the position is the contact of the object with the surface; and
retrieving a reference location co-ordinate of the point co-ordinate from the hash table.
6. The method of claim 5, wherein the step of retrieving a reference location co-ordinate
further comprises:
determining a nearest reference location co-ordinate to the point co-ordinate, when a map of the point co-ordinate is absent in the set of reference location co-ordinates.
7. The method of claim 1, wherein the reference location co-ordinate of the each point is a pixel as per a resolution of a computer screen projected on the surface.
8. The method of claim 5, wherein the step of determining a contact of an object comprises:
interpreting a contact, when a distance of the object from the surface is less than a threshold.
9. The method of claim 1, wherein the screen event comprises one or more of a single click mouse event, a double click mouse event and a drag operation.
10. The method of claim 1, wherein the set of mesh regions is within the set of boundary points.
11. The method of claim 5, wherein the predetermined criteria comprises:
a movement of the object at the position; and
a time duration of the contact of the object with the surface.
12. The method of claim 1, wherein the surface is one or more of a LCD screen, a rear projection of a computer screen, a front projection of a computer screen, and a paper image mounted on a wall.
13. A system for converting a surface to a touch surface, the system comprising:
a vision engine, configured to:
capture a set of location co-ordinates of a set of boundary points on the surface;
a drawing interface, configured to create a set of mesh regions from the captured
set of boundary points;
a hash table configured to store:
a point co-ordinate of each point of a mesh region; and
a reference location co-ordinate of the each point; an interpretation engine, configured to
analyze a position of an object on the surface; and
trigger a screen event on the position based on a predetermined criteria.
14. The system of claim 13, wherein a location co-ordinate is measured with respect to a sensor located in a perpendicular direction of the surface and is able to track a user and a predefined user interaction.
15. The system of claim 13, wherein the set of boundary points is obtained by the predefined user interaction with the surface.
16. The system of claim 13, wherein the hash table is further configured to store a set of reference location co-ordinates of the set of boundary points.
17. The system of claim 13, wherein the interpretation engine is further configured to:
determine a contact of the object with the surface, based on a distance of the object from the surface;
detect the position of the object on the surface;
fetch a point co-ordinate of a point at the position, from the sensor; and
retrieve a reference location co-ordinate of the point co-ordinate from the hash table.
18. The system of claim 17, wherein the interpretation engine is further configured to:
determine a nearest reference location co-ordinate to the point co-ordinate, when a map of the point co-ordinate is absent in the set of reference location co-ordinates.
19. The system of claim 13, wherein the reference location co-ordinate of the each point is a pixel as per a resolution of a computer screen projected on the surface.
20. The system of claim 17, wherein the interpretation engine is configured to determine the contact of the user object with the surface when the distance of the object from the surface is less than a threshold.
21. The system of claim 13, wherein the screen event comprises one or more of a single click mouse event, a double click mouse event and a drag operation.
22. The system of claim 13, wherein the set of mesh areas is within the set of boundary points.
23. The system of claim 13, wherein the predetermined criteria comprises:
a movement of the object at the position; and
a time duration of the contact of the object with the surface.
24. The system of claim 13, wherein the surface includes one or more of a LCD screen, a rear projection of a computer screen, a front projection of the computer screen and a paper image mounted on a wall.
25. The system of claim 13, wherein the sensor is communicatively coupled with a processor, whereby the processor comprises the system.
26. The system of claim 24, wherein:
the LCD screen is communicatively coupled with the processor; and
the rear projection and the front projection are by a projector, the projector being communicatively coupled with the processor.
27. A computer program product consisting of a plurality of program instructions stored on a
non-transitory computer-readable medium that, when executed by a computing device,
performs a method for converting a surface to a touch surface, the method comprising:
capturing a set of location co-ordinates of a set of boundary points on the surface;
creating a set of mesh regions from the set of boundary points;
mapping a point co-ordinate of each point in a mesh region to a reference location co-ordinate of the each point; and
triggering a screen event at a position on the surface, based on a predetermined criteria.