Abstract: The present disclosure addresses the problem of providing hand swipe gestures as input for interacting with a smart device. The method comprises obtaining one or more images containing RGB data from the smart device. The one or more images are processed to down-scale the resolution and obtain one or more down-scaled images, and a foreground region is obtained by detecting skin using chroma channel information. Further, a plurality of key Shi-Tomasi features is detected from the foreground region and hand motion is tracked by employing the plurality of key Shi-Tomasi features in a Lucas-Kanade optical flow to obtain a plurality of optical flow trajectories. Further, egocentric correction is performed on the plurality of optical flow trajectories to estimate corresponding displacement vector directions. Subsequently, the hand swipe is classified based on the corresponding displacement vector directions.
Claims:
1. A method for hand gesture interactions for smart devices, the method comprising:
capturing one or more images containing RGB (Red Green Blue) data from a camera of a smart device;
processing the one or more images to down-scale resolution of the one or more images to obtain one or more down scaled images;
obtaining a foreground hand region by detecting skin using chroma channel information of the one or more down scaled images;
detecting a plurality of key Shi-Tomasi feature points from the foreground hand region;
tracking a hand motion by employing the plurality of key Shi-Tomasi feature points in a Lucas-Kanade optical flow using a pyramidal approach, wherein the plurality of key Shi-Tomasi feature points are tracked frame by frame to determine a plurality of optical flow trajectories of the Lucas-Kanade optical flow;
performing an egocentric correction on the plurality of the optical flow trajectories to estimate corresponding displacement vector directions by eliminating irregularities caused due to movement of the head; and
classifying a hand swipe based on the corresponding displacement vector directions over the entire duration of the hand swipe gesture.
2. The method as claimed in claim 1, wherein the step of obtaining the foreground hand region includes detecting skin pixels and segmenting a large contour object in the one or more down scaled images that corresponds to the palm.
3. The method as claimed in claim 1, wherein the step of tracking a hand motion is performed after a hand area threshold is obtained heuristically based on a wearable device attached to the smart device.
4. The method as claimed in claim 3, further comprising determining a minimum number of the key Shi-Tomasi feature points to be tracked and re-employing the plurality of key Shi-Tomasi feature points on subsequent frames of the one or more down scaled images.
5. The method as claimed in claim 1, wherein classifying the hand swipe includes determining a resultant swipe direction for short intervals.
6. The method as claimed in claim 5, wherein the resultant swipe direction for short intervals is a sum of one or more weighted short interval directions, and wherein weights to the one or more short interval directions are assigned in proportion to the duration of the time intervals.
7. The method as claimed in claim 6, wherein each of the one or more short interval directions is a sum of weighted displacement vector directions.
8. A system 100 for hand gesture interactions for smart devices, the system 100 comprising:
at least one processor 102; and
a memory 104 communicatively coupled to the at least one processor 102, wherein the memory 104 comprises
a swipe detection module 106 to:
capture one or more images containing RGB (Red, Green and Blue) data from a camera of a smart device;
process the one or more images to down-scale resolution of the one or more images to obtain one or more down scaled images;
obtain a foreground hand region by detecting skin using chroma channel information of the one or more down scaled images;
detect a plurality of key Shi-Tomasi feature points from the foreground hand region;
track a hand motion by employing the plurality of key Shi-Tomasi feature points in a Lucas-Kanade optical flow using a pyramidal approach, wherein the plurality of key Shi-Tomasi feature points are tracked frame by frame to determine a plurality of optical flow trajectories of the Lucas-Kanade optical flow;
perform an egocentric correction on the plurality of the optical flow trajectories to estimate corresponding displacement vector directions by eliminating irregularities caused due to movement of the head; and
classify a hand swipe based on the corresponding displacement vector directions over the entire duration of the hand swipe gesture.
9. The system as claimed in claim 8, wherein the step of obtaining the foreground hand region includes detecting skin pixels and segmenting a large contour object in the one or more down scaled images that corresponds to the palm.
10. The system as claimed in claim 8, wherein the step of tracking a hand motion is performed after a hand area threshold is obtained heuristically based on a wearable device attached to the smart device.
11. The system as claimed in claim 10, further comprising determining a minimum number of the key Shi-Tomasi feature points to be tracked and re-employing the key Shi-Tomasi feature points on subsequent frames.
12. The system as claimed in claim 8, wherein classifying the hand swipe includes determining a resultant swipe direction for short intervals.
13. The system as claimed in claim 12, wherein the resultant swipe direction for short intervals is a sum of one or more weighted short interval directions, and wherein weights to the one or more short interval directions are assigned in proportion to the duration of the time intervals.
14. The system as claimed in claim 13, wherein each of the one or more short interval directions is a sum of weighted displacement vector directions.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEM AND METHOD FOR HAND GESTURE INTERACTIONS FOR SMART DEVICES
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed
TECHNICAL FIELD
The embodiments herein generally relate to hand gestures and, more particularly, to hand gesture interactions for smart devices.
BACKGROUND
Traditionally, interaction between a smart device and a human used a hardware interface such as a keyboard and mouse. In recent times, with the development of technology, the hardware interface has been replaced with the human body. Being intuitive and natural, gestures have become an important means of natural human-computer interaction.
Hand gestures form a popular means of interaction between computers, wearables, robots and humans, and have been discussed in many recent works exemplified by Augmented Reality (AR) devices such as Microsoft KinectTM, HoloLensTM and GoogleTM Glass. AR involves overlaying computer simulated imagery on top of the real world as seen through the optical lens of the device or the camera of a smartphone/tablet.
On the other hand, device-side advances in smart device technology have introduced several low-cost alternatives such as Google CardboardTM and Wearability, which are video-see-through devices for providing immersive experiences with a VR enabled smartphone. By using stereo rendering of the camera feed and overlaying the related information on the smart device screen, smart devices can also be extended for use in AR applications.
Google CardboardTM, which was initially envisioned for VR based applications, has also been extended to AR. Such frameworks use the GoogleTM speech engine for speech based user interaction, which requires constant network connectivity for communicating with Google servers. However, in industrial and outdoor settings, speech recognition is found to be inaccurate due to ambient noise.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 illustrates a system for hand gesture interaction for a smart device, according to an embodiment of the present subject matter.
FIG. 2 is a block diagram explaining a flow of hand swipe recognition, according to some embodiments of the present subject matter.
FIG. 3 is an example of flow vector representing feature point movement across subsequent frames showing anomalies, according to some embodiments of the present subject matter.
FIG. 4 is an example of the time intervals with feature points threshold less than and greater than 40%, according to some embodiments of the present subject matter.
FIG. 5 is a flowchart illustrating a method for hand gesture interactions for smart devices, according to some embodiments of the present subject matter.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for hand gesture interactions for smart devices is disclosed. The method includes capturing one or more images containing RGB data from a camera of a smart device. The captured images are processed to down-scale the resolution of the one or more images to obtain one or more down-scaled images. Further, a foreground hand region is obtained using chroma channel information of the one or more down-scaled images. Subsequently, key Shi-Tomasi feature points are detected from the foreground hand region. The key Shi-Tomasi feature points are employed in Lucas-Kanade optical flow using a pyramidal approach to track hand motion. Further, egocentric correction is performed on the plurality of optical flow trajectories to estimate corresponding displacement vector directions by eliminating irregularities caused due to movement of the head. Further, the hand swipe is classified based on the corresponding displacement vector directions over the entire duration of the hand gesture.
In another embodiment, a system for hand gesture interactions for smart devices is disclosed. The system includes at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory comprises several modules. The modules include a swipe detection module that captures one or more images containing RGB data from a camera of a smart device. The one or more captured images are processed to down-scale the resolution to obtain one or more down-scaled images. Further, a foreground hand region is obtained using chroma channel information of the one or more down-scaled images. Subsequently, key Shi-Tomasi feature points are detected from the foreground hand region. The key Shi-Tomasi feature points are employed in Lucas-Kanade optical flow using a pyramidal approach to track hand motion. Further, egocentric correction is performed on the plurality of optical flow trajectories to estimate corresponding displacement vector directions by eliminating irregularities caused due to movement of the head. Further, the hand swipe is classified based on the corresponding displacement vector directions over the entire duration of the hand gesture.
It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of the invention, as claimed.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
The manner in which the described system is implemented for hand gesture interactions for smart devices has been explained in detail with respect to the following figure(s). While aspects of the described system can be implemented in any number of different computing systems, transmission environments, and/or configurations, the embodiments are described in the context of the following exemplary system.
FIG. 1 illustrates a system 100 for hand gesture interaction for a smart device, according to an embodiment of the present subject matter. As shown in FIG. 1, the system 100 includes one or more processor(s) 102, a memory 104, and interface(s) 108 communicatively coupled to each other. Further, the memory 104 includes a swipe detection module 106. The processor(s) 102, the memory 104, and the interface(s) 108 may be communicatively coupled by a system bus or a similar mechanism. Although FIG. 1 shows example components of the system 100, in other implementations, the system 100 may contain fewer components, additional components, different components, or differently arranged components than depicted in FIG. 1.
The processor(s) 102 may include circuitry implementing, among others, audio and logic functions associated with the communication. The processor(s) 102 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor(s) 102. The processor(s) 102 can be a single processing unit or a number of units, all of which include multiple computing units. The processor(s) 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 102 is configured to fetch and execute computer-readable instructions and data stored in the memory 104.
The functions of the various elements shown in the figure, including any functional blocks labeled as “processor(s)”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional, and/or custom, may also be included.
The interface(s) 108 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, and a printer. The interface(s) 108 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For the purpose, the interface(s) 108 may include one or more ports for connecting the system 100 to other devices.
The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 104, may store any number of pieces of information, and data, used by the system 100 to implement the functions of the system 100. The memory 104 may be configured to store information, data, applications, instructions or the like for enabling the system 100 to carry out various functions in accordance with various example embodiments. Additionally or alternatively, the memory 104 may be configured to store instructions which when executed by the processor(s) 102 causes the system 100 to behave in a manner as described in various embodiments. The memory 104 includes a swipe detection module 106 and/or other modules. The swipe detection module 106 includes routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The other modules may include programs or coded instructions that supplement applications and functions of the system 100.
FIG. 2 is a block diagram explaining a flow of hand swipe recognition, according to some embodiments of the present subject matter. In FIG. 2(a), the foreground hand area is separated from the background of the image based on skin color filtering of the image. Further, FIG. 2(b) shows the hand frame obtained by utilizing the largest contour of the image. FIG. 2(c) shows the key Shi-Tomasi feature point extraction on the hand segment for tracking the movement of the hand, and FIG. 2(d) depicts the overall flow trajectory on the sequence of frames to determine the direction of the hand swipe.
In operation, the swipe detection module 106 obtains one or more images containing RGB (Red, Green and Blue) data from a camera (or an image capturing device/module) of a smart device. Examples of smart devices include wearable devices that are capable of capturing images. The captured images are further processed to down-scale the resolution. For example, the images are down-scaled to a resolution of 640x480 to reduce the processing time without compromising on image quality. Subsequently, in an embodiment, a method for detecting a hand in the processed images is disclosed. The swipe detection module 106 detects skin pixels using chroma channel information of the images. Generally, the histograms of the chroma channels (Cb and Cr) of the image exhibit a unimodal distribution, whereas changing luminosity results in a multimodal Y channel histogram. The luminosity of the chroma channels is utilized to characterize the brightness of a particular chrominance value.
The present disclosure utilizes the techniques of a face segmentation algorithm that exploits the spatial characteristics of human skin color using chroma channel values. The following threshold values on the chroma channels are used as filters for segmenting the hand region from the background scene:
77 < Cb < 127
133 < Cr < 173
By applying the above threshold values to the chroma channels obtained for the image, the system obtains the hand region from the background scene in the image.
However, the background scene may also contain skin colored regions that are not part of the hand. To avoid picking up skin-like colors from the background, the system 100 retains only the largest contour, obtained by contour segmentation using topological structural analysis of digitized binary images with a border following algorithm/technique.
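For illustration only, a minimal OpenCV (Python) sketch of this segmentation step is given below, assuming the Cb/Cr thresholds quoted above, the 640x480 down-scaling, and the OpenCV 4.x return signature of findContours; the function name segment_hand and the parameter choices are hypothetical and not part of the claimed method.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Sketch of the foreground hand segmentation step (assumptions noted above)."""
    # Down-scale to 640x480 to reduce processing time.
    small = cv2.resize(frame_bgr, (640, 480))
    # Convert to YCrCb and threshold the chroma channels for skin.
    ycrcb = cv2.cvtColor(small, cv2.COLOR_BGR2YCrCb)
    # Channel order after conversion is (Y, Cr, Cb).
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Retain only the largest contour, assumed to correspond to the palm.
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return small, None
    largest = max(contours, key=cv2.contourArea)
    hand_mask = np.zeros_like(skin_mask)
    cv2.drawContours(hand_mask, [largest], -1, 255, thickness=cv2.FILLED)
    return small, hand_mask
```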
In an embodiment, a method for tracking the movement of the hand is disclosed. The swipe detection module 106 utilizes the obtained foreground hand region to obtain a plurality of key Shi-Tomasi feature points. The plurality of key Shi-Tomasi feature points is then tracked with Lucas-Kanade optical flow using a pyramidal approach to follow the movement of the hand.
However, the tracking is initialized for the detected key Shi-Tomasi feature points only after the following two conditions are satisfied:
After the hand region occupies a certain area in the user's field of view (FoV). The area threshold can be heuristically determined based on the smart device being used and the typical distance of the user's hand from the wearable (26% of the area has been chosen as the threshold for the Google CardboardTM based application discussed in the results section). This helps to avoid tracking of false alarm contours.
After a minimum number of feature points is attained on the foreground hand region (40 points are used as the threshold in the experiments). This helps in achieving better accuracy by tracking a good number of points in cases of blurry vision from the smart device when the hand is held very close. Feature points will be re-employed on the subsequent frame, since the number of points on a blurry image will be very low, and swipe gestures typically involve the user's palm region in the FoV, which is smooth, thereby reducing the number of points.
After the two conditions are satisfied, the tracking is initialized, the key Shi-Tomasi feature points are tracked frame by frame and the resultant displacement vectors are calculated.
However, small irregularities due to the motion of the head give rise to anomalous results. To avoid the anomalous results, the system 100 uses the resultant displacement vectors to compute the overall flow trajectory of the feature points to track the movement of the hand.
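A minimal Python/OpenCV sketch of the feature detection and pyramidal Lucas-Kanade tracking step is given below; the parameter values (maximum corners, quality level, window size, pyramid levels) are illustrative defaults and not prescribed by the present disclosure.

```python
import cv2

def track_hand_features(prev_gray, next_gray, hand_mask):
    """Detect Shi-Tomasi corners inside the hand mask and track them with
    pyramidal Lucas-Kanade optical flow (parameter values are assumptions)."""
    # Shi-Tomasi ("good features to track") corners restricted to the hand region.
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                     qualityLevel=0.01, minDistance=7,
                                     mask=hand_mask)
    if points is None or len(points) < 40:   # minimum feature-point condition
        return None
    # Pyramidal Lucas-Kanade optical flow, frame to frame.
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None, winSize=(15, 15), maxLevel=2)
    ok = status.flatten() == 1
    good_old = points[ok].reshape(-1, 2)
    good_new = next_points[ok].reshape(-1, 2)
    # Per-point displacement vectors used later to estimate the swipe direction.
    return good_new, good_new - good_old
```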
In an embodiment, a method for determining the direction of the movement of the hand swipe is disclosed. The direction of the hand swipe provides the classification of the hand swipe for interacting with a smart device. The swipe detection module 106 utilizes a classification algorithm that computes the flow vector displacement over the entire duration of the gesture. FIG. 3 is an example of flow vectors representing feature point movement across subsequent frames showing anomalies, according to some embodiments of the present subject matter.
Generally, corresponding points of key Shi-Tomasi features in subsequent frames are utilized for computing the flow vector displacement over the entire duration of the gesture. However, if a significant number of the corresponding points of the initialized key Shi-Tomasi feature points are lost in subsequent frames, the direction of swipe $\theta_i$ for that particular interval $i$ is calculated using the following equation:
$$\theta_i = \frac{1}{M}\sum_{k=1}^{N} m_k\,\theta_k \qquad \text{Equation (1)}$$
where $M = \sum_{k=1}^{N} m_k$,
and where $m_k$ and $\theta_k$ are the magnitude and direction of the displacement vector for each key Shi-Tomasi feature point, respectively, $M$ is the resultant magnitude of the displacement vectors of the key Shi-Tomasi feature points, and $N$ is the number of feature points retained over interval $i$.
However, for a typical user hand moving from one end of the frame to the other, the displacement vectors along the swipe are of higher magnitude. Therefore, the displacement vector directions are given weights in proportion to their magnitudes, as shown in Equation (1), thereby reducing the weights of false positive features tracked in a local neighborhood. The process can be repeated by re-employing the feature points and tracking for the next interval (i+1) until the foreground hand region goes out of the FoV. This step helps in improving the robustness of the algorithm/technique by avoiding false alarm recognition for cases when the camera goes out of focus for a very short duration of the gesture. FIG. 4 is an example of the time intervals with feature points threshold less than and greater than 40%, according to some embodiments of the present subject matter.
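As an illustration of Equation (1), a short Python sketch is given below; the (N, 2) array layout of the displacement vectors and the helper name are assumptions made for the example.

```python
import numpy as np

def interval_direction(displacements):
    """Magnitude-weighted mean direction over one short interval (Equation (1)).

    `displacements` is an (N, 2) array of per-feature (dx, dy) vectors
    accumulated over the interval.
    """
    magnitudes = np.linalg.norm(displacements, axis=1)            # m_k
    directions = np.degrees(np.arctan2(displacements[:, 1],
                                       displacements[:, 0]))      # theta_k
    M = magnitudes.sum()
    if M == 0:
        return None
    # Angle wrap-around near +/-180 degrees is ignored here, as in Equation (1).
    return float((magnitudes * directions).sum() / M)
```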
A resultant swipe direction $\theta_f$ of the user swipe can be obtained based on one or more weighted short interval directions. In an embodiment of the present disclosure, the resultant swipe direction $\theta_f$ of the user swipe is obtained by calculating the sum of the one or more weighted short interval directions. The sum of weighted short interval directions is obtained from the $\theta_i$ values calculated for all the short intervals over the entire duration of the hand gesture using the following equation:
$$\theta_f = \frac{1}{T}\sum_{i=1}^{n} t_i\,\theta_i \qquad \text{Equation (2)}$$
where $t_i$ is the time taken for interval $i$, $T$ is the sum of all $t_i$, and $n$ is the total number of intervals. In Equation (2), feature points sustained for a longer duration are given higher priority, thus improving the accuracy. The resultant direction $\theta_f$ is used for recognizing the swipe gesture with a tolerance band of ±35° (for example, -35° ≤ $\theta_f$ ≤ 35° for a Right swipe).
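A sketch of the final classification, combining Equation (2) with the ±35° tolerance band, is shown below; the band centres assumed for Up, Down and Left follow the image coordinate convention (x to the right, y downwards), since the specification states only the Right band explicitly.

```python
def classify_swipe(interval_thetas, interval_times, tol=35.0):
    """Time-weighted resultant direction (Equation (2)) and swipe label.

    Band centres for Up/Down/Left are assumptions; only the Right band
    (-35 <= theta_f <= 35) is stated in the specification.
    """
    T = sum(interval_times)
    if T == 0:
        return "Unclassified"
    theta_f = sum(t * th for t, th in zip(interval_times, interval_thetas)) / T
    if -tol <= theta_f <= tol:
        return "Right"
    if theta_f >= 180 - tol or theta_f <= -(180 - tol):
        return "Left"
    if 90 - tol <= theta_f <= 90 + tol:
        return "Down"   # y axis points downwards in image coordinates
    if -(90 + tol) <= theta_f <= -(90 - tol):
        return "Up"
    return "Unclassified"
```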
In an embodiment, an example of hand gesture interactions is disclosed. Hand swipe detection experiments were carried out as an application on a frugal VR/AR device, Google CardboardTM. Hence, an application has been developed to work on smartphones running Android OSTM. The stereo camera view on the smartphone allows the application to be used with Google CardboardTM. The detected gesture result is overlaid after a single swipe of the hand. The hand swipe dataset used in the example to test gesture recognition is composed of 100 sample videos recorded using Android smartphones used inside Google CardboardTM. The information regarding the direction of each swipe was labelled manually. The videos were recorded in different places (living room, indoor office setting, among others) in order to gather variations in lighting conditions, color compositions and dynamic background scenes. The videos were recorded with Nexus 6TM and Moto G3TM smartphones at 25 frames per second. The dataset according to the present disclosure comprises 4 different swipes (Left, Right, Up, Down). The Google CardboardTM VR SDK for Android is used for developing the smartphone application for gesture recognition from the wearable Google CardboardTM. The OpenCV SDKTM for Android has been used for implementing the proposed approach to run on Android smartphone devices.
In order to analyze the efficacy of the algorithm/technique of the present disclosure, the example evaluates it on the hand swipe dataset of 100 sample videos and compares the same with Nanogest air gesture recognition. The obtained results are expressed using multiclass confusion matrices, a statistical tool used to analyze the efficiency of a classification model.
| True gesture \ Predicted gesture | Up | Down | Left | Right | Unclassified |
|---|---|---|---|---|---|
| Up | 21 | 0 | 1 | 1 | 2 |
| Down | 0 | 24 | 0 | 0 | 1 |
| Left | 1 | 0 | 22 | 0 | 2 |
| Right | 1 | 1 | 0 | 21 | 2 |
Table 1
| True gesture \ Predicted gesture | Up | Down | Left | Right | Unclassified |
|---|---|---|---|---|---|
| Up | 20 | 0 | 0 | 0 | 5 |
| Down | 1 | 21 | 0 | 0 | 3 |
| Left | 0 | 0 | 21 | 1 | 3 |
| Right | 0 | 0 | 2 | 21 | 2 |
Table 2
Table 1 gives the experimental results of the present disclosure and Table 2 provides the results of the prior art Nanogest. The rows are the actual gestures, and the columns contain the predicted results. The higher the diagonal values, the better the accuracy of the classification task. The confusion matrix is used to analyze each class separately using precision and recall values. The experimental results of the proposed approach are summarized in Table 1, and show that the algorithm/technique of the present disclosure yields 88 correct classifications with 12 misclassifications. The misclassifications were analyzed and the root cause was determined: three misclassifications were observed due to the ambiguity of the direction of the swipe. The other nine misclassifications resulted from a high proximity of the hand to the camera, making the obtained feed blurry beyond a comprehensible threshold. The confusion matrices of the other techniques showed accuracies of 83% and 73% under similar conditions. The obtained results show that the present disclosure produces superior results with negligible latency. The turnaround time for the prior art was 0.885 s, while the average response time for the proposed approach was found to be 0.19 s.
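As a worked example of how the per-class precision and recall mentioned above follow from Table 1, a small Python helper is shown below; the helper and variable names are illustrative and not part of the disclosure.

```python
def precision_recall(confusion, class_index):
    """Per-class precision and recall from a confusion matrix whose rows are
    true gestures and whose columns are predicted gestures."""
    tp = confusion[class_index][class_index]
    predicted = sum(row[class_index] for row in confusion)   # column sum
    actual = sum(confusion[class_index])                     # row sum
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    return precision, recall

# Table 1 (proposed approach); columns: Up, Down, Left, Right, Unclassified.
table1 = [[21, 0, 1, 1, 2],
          [0, 24, 0, 0, 1],
          [1, 0, 22, 0, 2],
          [1, 1, 0, 21, 2]]
# For "Up": precision = 21/23 (about 0.91), recall = 21/25 = 0.84.
print(precision_recall(table1, 0))
```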
FIG. 5 is a flowchart illustrating a method for hand gesture interactions for smart devices, according to some embodiments of the present disclosure. At block 502, one or more images containing RGB data are captured from a camera of a smart device. At block 504, the one or more images are processed to down-scale their resolution. At block 506, a foreground region is obtained by detecting skin using chroma channel information. At block 508, a plurality of key Shi-Tomasi features is detected from the foreground region. At block 510, hand motion is tracked by employing the plurality of key Shi-Tomasi feature points in Lucas-Kanade optical flow using a pyramidal approach. At block 512, egocentric correction is performed on the optical flow trajectories to estimate the corresponding displacement vector directions. At block 514, based on the corresponding displacement vector directions, the hand swipe is classified and taken as input to the smart device.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant arts based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.