Abstract: The present invention relates to an AI (Artificial Intelligence) based surveillance system and a method of monitoring shoplifting activity in megastores. The system includes a camera (101), a posture generation module, a skeleton pre-processing module, a feature evaluation module, a feature reduction module, an activity classification module, a monitoring module (102) and an alert generating module (103). In the present system, surveillance is performed by a camera-based setup with a height-mounted camera (101) placed at a particular distance to capture a real-time video stream. The video stream is then processed in real time by a video processing subsystem to identify whether any person is involved in a shoplifting act. Further, the system generates a warning message on the monitoring screen and an alert when shoplifting occurs.
The present invention, in general, relates to an AI (Artificial Intelligence) based surveillance system and a method of monitoring shoplifting activity in megastores.
BACKGROUND OF THE INVENTION
Nowadays, theft of retail products from megastores/shops by customers is increasing massively. Some buyers commit theft by concealing items in a bag or under clothes when no one is observing, and then leave the store/shop without paying. An individual involved in such theft is called a shoplifter and the act is termed shoplifting. In simple terms, shoplifting is the act of taking items from an open retail establishment with no intention of paying, in a way that goes unnoticed. Shoplifters are a foremost menace to retailers and cause massive losses to the business.
Shoplifting is a kind of suspicious activity in which the person does not behave normally. For example, a person's expected normal behaviour in a megastore includes walking, examining items, billing, talking, etc. On the contrary, the actions involved in shoplifting differ from the normal ones, such as putting items under clothing or into bags when security personnel are not watching. The National Retail Federation (NRF-2020) report still identifies shoplifting as a foremost cause of inventory shrinkage, where inventory shrinkage is merchandise loss due to shoplifting, error, fraud, employee theft, etc. Shoplifting causes tremendous losses and disruptions for retailers. Therefore, early and earnest attention is needed on this issue to minimize the losses caused to retailers.
At present, retailers use CCTV surveillance infrastructure to keep an eye on each buyer. Here, control room personnel attentively examine the video stream received from the CCTV infrastructure. Owing to limited human attention, one cannot watch an array of video frames for very long, and lapses are common. This process reduces losses to some extent but is still not very effective in dealing with shoplifting, as it requires manual intervention. Therefore, an automated CCTV-based surveillance system could be the right solution to automatically monitor each person's activity in stores/shops.
Further, there are several patent applications based on monitoring systems; one such prior art is CN102881100B, entitled "Entity storefront anti-theft monitoring method based on video analysis". The invention discloses a method in which the movement locus of a customer in an entity storefront is tracked by video analysis, the hand and upper-limb movements of the customer at a given locality are recognized, and, with reference to the change in commodities before and after the customer leaves the locality, it is judged whether the customer has exhibited suspected pilferage behaviour. When a customer is judged to have suspected stealing behaviour, the customer is persistently tracked, and when the customer is about to leave the entity storefront, a prompt is sent to the shop attendant or security personnel.
However, the available prior arts are still not very effective in dealing with shoplifting.
Therefore, the present invention provides an autonomous and intelligent solution for shoplifting detection in megastores to overcome the shortcomings of the prevalent approach discussed above. In the present system, surveillance is performed by a camera-based setup with a height-mounted camera placed at a particular distance to capture a real-time video stream. The video stream is then processed in real time by a video processing subsystem to identify whether any person is involved in a shoplifting act. Further, the system generates a warning message on the screen and an alert when shoplifting occurs.
OBJECTIVES OF THE INVENTION
The prime objective of the present invention is to provide an AI based surveillance system for shoplifting detection in megastores.
Another objective of the present invention is to provide a camera-based system with a height mounted camera placed at a particular distance to capture the real-time video stream.
Another objective of the present invention is to provide a system with video processing subsystem to identify whether any person is involved in shoplifting act or not.
Another objective of the present invention is to provide a method for shoplifting detection in megastores based on AI surveillance system.
These and other objectives of the present invention will be apparent from the drawings and descriptions herein. Every objective of the invention is attained by at least one embodiment of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The drawings described herein are for illustrative purposes for selected embodiments only and not all possible implementations. The amount of detail offered is not intended to limit the scope of the present disclosure. Further objectives and advantages of this invention will be more apparent from the ensuing description when read in conjunction with the accompanying drawing and wherein:
Figure 1 represents the surveillance scene of the present invention.
Figure 2 represents a workflow of the surveillance system of the present invention.
Figure 3 represents the architectural view of the surveillance system design.
It will be recognized by the person of ordinary skill in the art, given the benefit of the present disclosure, that the examples shown in the figures are not necessarily drawn to scale. Certain features or components may have been enlarged, reduced, or distorted to facilitate a better understanding of the illustrative aspects and examples disclosed herein. In addition, the use of shading, patterns, dashes, and the like in the figures is not intended to imply or mean any particular material or orientation unless otherwise clear from the context.
DETAILED DESCRIPTION OF THE INVENTION
The embodiments herein and the various features and advantageous details thereof are explained more comprehensively with reference to the non-limiting embodiments that are detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein.
The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting.
Unless otherwise specified, all terms used in disclosing the invention, including technical and scientific terms, have the meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. By means of further guidance, term definitions may be included to better appreciate the teaching of the present invention.
As used in the description herein, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
One embodiment of the present invention is an AI (Artificial Intelligence) based surveillance system which includes a camera (101), a posture generation module, a skeleton pre-processing module, a feature evaluation module, a feature reduction module, an activity classification module, a monitoring module (102) and an alert generating module (103).
Another embodiment of the present invention is a method of monitoring a shoplifting activity in megastores through the AI (Artificial Intelligence) based surveillance system. The method includes the steps of: a) capturing a real-time video sequence through the camera (101); b) extracting and processing a human skeleton from the captured video sequences of step (a) through a posture generation module; c) discarding irrelevant frames, choosing selective joints, performing coordinate scaling and estimating missing joints of the human skeleton of step (b) through a skeleton pre-processing module; d) extracting the relevant features from the encoded skeletons of step (c) and fusing the features through a feature evaluation process; e) transforming the large feature set extracted in step (d) into compact-sized features by suppressing redundant and insignificant information through a feature reduction module; and f) classifying the sequences of step (e) through an activity classification module.
The present invention will now be explained with the preferred embodiment of the system and the method in reference to the figures.
Figure 1 represents the surveillance scene and the person's movement scenarios based on the system of the present invention. It includes a camera (101), which can be of any type, a monitoring module, in an example a computer system (102), to analyse the data captured through the camera (101), and an alert module (103) to provide an alert in case shoplifting is detected. The alert module provides the alert as an alarm or sound.
Figure 2 shows the workflow of the surveillance system for identifying shoplifting events in megastores/shops. The videos are captured through the camera (101) shown in Figure 1, which is mounted at a height and at a distance suitable to capture a real-time video sequence/stream. The captured video sequences (104) from the camera (101) are first passed to the image pre-processing module (105) to extract and process a human skeleton. In the next stage, the system examines each frame and checks whether a human is present; if yes, the system analyses the human's activity. Finally, the system generates an alert in case shoplifting activity is detected.
Figure 3 presents the architectural view of the surveillance system of the present invention. The method uses video sequences/frames captured from the CCTV camera (101) as input and passes them to the posture generation module. The Posture Generation module extracts skeletal information from the input sequence using the OpenPose algorithm. The extracted skeleton information containing body joints is provided as input to the Skeleton Pre-processing module, where four important tasks, i.e. discarding irrelevant frames, choosing selective joints, coordinate scaling and estimation of missing joints, are performed. In the next step, the Feature Evaluation module extracts relevant features, namely spatial, motion, angle and distance features, which are calculated from the N encoded skeletons received from the skeleton pre-processing module and fused accordingly. These are subsequently used by the feature reduction process. The Feature Reduction module uses principal component analysis (PCA) to transform the large feature set into compact-sized features by suppressing redundant and insignificant information. Finally, the transformed features are passed to the Activity Classification module to classify these sequences into respective classes (i.e., normal and shoplifting).
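By way of a non-limiting illustration, the overall data flow described above may be sketched in Python as follows; the function and parameter names (monitor_window, posture_gen, pre_process, evaluate, reduce, classify) are illustrative placeholders for the modules of the system and are not part of the original specification.
def monitor_window(frames, posture_gen, pre_process, evaluate, reduce, classify):
    """Illustrative end-to-end pass over one window of N captured frames."""
    skeletons = posture_gen(frames)      # posture generation: OpenPose 2D skeletons
    encoded = pre_process(skeletons)     # skeleton pre-processing: four encoding steps
    features = evaluate(encoded)         # feature evaluation: spatial, motion, angle, distance
    compact = reduce(features)           # feature reduction: PCA to a compact vector
    return classify(compact)             # activity classification: "normal" or "shoplifting"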
Posture Generation Module:
Posture Generation is an essential module of the present invention that uses the OpenPose algorithm to extract the skeleton information of a person. OpenPose localizes the 2D body joints present in the human skeleton. First, feature maps are extracted from the input RGB images through a baseline CNN. A multi-stage CNN then evaluates Part Confidence Maps (PCM) and Part Affinity Fields (PAF). A PCM indicates, for any given pixel, whether it belongs to a particular body part in two-dimensional space, and a PAF comprises a series of 2D vectors that encode the orientation and position of people's limbs in the image. The last stage parses the confidence maps and affinity fields using a greedy bipartite matching algorithm to obtain the 2D key points of each person in an image. In an exemplary embodiment, the system extracts the spatial locations of 18 body joints using OpenPose, as follows:
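The exemplary joint list is not reproduced in this text. As a non-limiting sketch, the standard 18-keypoint COCO layout commonly used with OpenPose is assumed below; the exact index order in the original exemplary may differ.
# Assumed COCO 18-keypoint layout of OpenPose (index order is an assumption of this sketch)
COCO_18_JOINTS = [
    "Nose", "Neck",
    "RShoulder", "RElbow", "RWrist",
    "LShoulder", "LElbow", "LWrist",
    "RHip", "RKnee", "RAnkle",
    "LHip", "LKnee", "LAnkle",
    "REye", "LEye", "REar", "LEar",
]
NECK, R_HIP, L_HIP = 1, 8, 11   # indices referenced by the sketches that follow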
Further, the extracted joint’s positions are analyzed to identify the actions performed by an individual in a video sequence.
Skeleton Pre-processing Module
The raw skeleton data containing 18 body joints is not directly used for prominent feature extraction, so as to avoid inaccurate training and inconsistent outcomes. A four-step pose encoding in the present invention helps to refine the prominent feature extraction process. The pose encoding is done in four steps, as follows:
It first discards those frames where OpenPose detects no human skeleton or where the detected skeleton has no neck or thigh.
It then removes all head and leg joints. The human skeleton generated by OpenPose represents 18 body joints. However, the positions of the head and legs do not contribute much to shoplifting-type activities, and thus the present system removes five points of the head (i.e., one head/nose point, two ears and two eyes) and four points of the legs (i.e., two ankles and two knees) from the skeleton data. This helps in reducing the feature size and the overall computation. The final skeleton has nine body joints: one neck, two shoulders, two elbows, two wrists, and two hips, indexed as per the exemplary shown above.
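As a non-limiting sketch of this selection step, assuming the COCO-18 indexing above and skeletons stored as flat arrays [x_0, y_0, x_1, y_1, ...], the nine retained joints may be picked out as follows; the index list is an assumption of this sketch, not taken from the original exemplary.
import numpy as np

# Assumed indices of the nine retained joints in the COCO-18 layout:
# Neck, RShoulder, RElbow, RWrist, LShoulder, LElbow, LWrist, RHip, LHip
KEPT_JOINTS = [1, 2, 3, 4, 5, 6, 7, 8, 11]

def select_joints(skeleton_xy):
    """Keep only the nine upper-body joints from a flat [x0, y0, x1, y1, ...] array."""
    s = np.asarray(skeleton_xy, dtype=float).reshape(-1, 2)   # shape (18, 2)
    return s[KEPT_JOINTS].reshape(-1)                         # flat array of the 9 retained joints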
Further, coordinate scaling is performed to transform the coordinates into the same units. The joint positions obtained in the preceding step comprise x and y coordinates in different units; therefore, scaling is required to transform these coordinates into the same units, as given by the following procedure:
Input: Skeleton S of size L, stored as interleaved coordinates
S = {x_0, y_0, x_1, y_1, x_2, y_2, ..., x_L, y_L}
Output: Scaled (neck-centred) coordinates of skeleton S
def scale_body_offset(S):
    # S is a flat array [x_0, y_0, x_1, y_1, ...]; the neck joint becomes the origin
    S = S.copy()
    px0, py0 = get_joint(S, NECK)   # spatial location of the neck joint
    S[0::2] = S[0::2] - px0         # shift every x coordinate
    S[1::2] = S[1::2] - py0         # shift every y coordinate
    return S
The above procedure shows the coordinate scaling, where a skeleton (S) with its set of body joints is taken as input and scaled body joints are produced as output. The scaling positions the neck joint at the origin (0, 0) by subtracting the coordinate values of the neck joint from the coordinate values of all the body joints, so that the scaled body joints of the exemplary skeleton lie on a 2D plane centred at the neck.
And finally, it fills the missing joints through their relative positioning with respect to the previous frame in the sequence. Equations 1 and 2 present the calculation of the missing joint coordinates (x_i, y_i).
x_i = x_0 + x_i_prev ----- (1)
y_i = y_0 + y_i_prev ----- (2)
Here (x_0, y_0) is the spatial location of the neck joint in the current frame and (x_i_prev, y_i_prev) is the spatial location of the corresponding missing joint in the previous frame.
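A non-limiting sketch of this filling step is given below, assuming each skeleton is a flat numpy array of the nine retained joints with NaN marking undetected joints and the neck stored at index 0 (as in the selection sketch above); it illustrates Equations (1) and (2) and is not the original implementation.
import numpy as np

def fill_missing_joints(curr, prev_scaled):
    """Estimate missing joints of the current frame per Equations (1) and (2)."""
    curr = np.asarray(curr, dtype=float).copy()
    prev_scaled = np.asarray(prev_scaled, dtype=float)   # neck-relative coordinates of the previous frame
    x0, y0 = curr[0], curr[1]                            # neck joint of the current frame (index 0)
    for i in range(0, len(curr), 2):                     # walk over (x_i, y_i) pairs
        if np.isnan(curr[i]) or np.isnan(curr[i + 1]):
            curr[i] = x0 + prev_scaled[i]                # Eq. (1)
            curr[i + 1] = y0 + prev_scaled[i + 1]        # Eq. (2)
    return curr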
Feature Evaluation Module
Feature evaluation is the next essential module of the present invention, in which pertinent features are extracted from the pre-processed skeleton data. This section presents the feature-encoding schemes using the pose-encoded skeleton data containing 9 joints each. Here, a window of size N is taken to represent N pre-processed skeleton frames that encode a complete human act. Let S = {S_1, S_2, S_3, ..., S_N}, where S is the set of skeletons for the N frames. Each point in a skeleton is represented by (x_i, y_i).
To get unique features from the skeletons and their body parts, the calculation of features over the N frames is given below:
Extract spatial features by normalizing body joints, as shown in equation 3.
X_norm = S_i / Mean(Height H of each skeleton), where 0 ≤ i ≤ N ----- (3)
The normalized body joints (X_norm) are calculated by normalizing each body joint of the N skeletons, i.e., by dividing the scaled body offsets by the mean height of the N skeletons.
Further, the height of each skeleton is calculated as shown below, where the distance between the neck and the hips is computed using the Euclidean distance:
Get Skeleton Height H
Input: Skeleton X of size L, stored as interleaved coordinates
X = {x_0, y_0, x_1, y_1, x_2, y_2, ..., x_L, y_L}
Output: Height H of the skeleton
import math

def get_body_height(X):
    # Height is the Euclidean distance between the neck and the mid-hip point
    x0, y0 = get_joint(X, NECK)
    x11, y11 = get_joint(X, L_HIP)
    x12, y12 = get_joint(X, R_HIP)
    if math.isnan(y11) and math.isnan(y12):
        return 1.0                                   # no hip detected: fall back to unit height
    if math.isnan(y11):
        x1, y1 = x12, y12                            # only the right hip is available
    elif math.isnan(y12):
        x1, y1 = x11, y11                            # only the left hip is available
    else:
        x1, y1 = (x11 + x12) / 2, (y11 + y12) / 2    # midpoint of the two hips
    H = math.sqrt((x0 - x1) ** 2 + (y0 - y1) ** 2)
    return H
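A non-limiting sketch of Equation (3), reusing the height function above, is as follows; the representation of the window as a list of flat numpy arrays is an assumption of this sketch.
import numpy as np

def spatial_features(scaled_skeletons):
    """Normalize every scaled skeleton in the window by the mean skeleton height (Eq. 3)."""
    mean_height = np.mean([get_body_height(s) for s in scaled_skeletons])
    return [np.asarray(s, dtype=float) / mean_height for s in scaled_skeletons]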
Extract motion features: The motion between body joints (V) is calculated by subtracting the joint positions in the previous frame from the joint positions in the current frame, as shown in equation 4.
V=X[S_N ]-X[S_(N-1) ] ----- (4)
Extract angular features: Assume (x_j, y_j) and (x_k, y_k) are the two points in the skeleton; the angle (θ) between two body points can be obtained through equation 5. The body points for which angles are calculated are presented in Table 1.
θ = tan^(-1)((x_j - x_k) / (y_j - y_k)) ----- (5)
Extract distance features by calculating the length of each limb (L), which is simply the Euclidean distance between two body points, as shown in equation 6.
L=v(2&?(x_k-x_j)?^2+?(y_k-y_j)?^2 ) ----- (6)
Table 1 presents the body point pairs, i.e. (x_j, y_j) and (x_k, y_k), for which the length (limb) parameter is calculated.
Finally, the extracted features obtained from the above-mentioned feature encoding schemes are concatenated to represent the final features, as shown in equation 7 (a combined sketch of the motion, angle, distance and concatenation steps follows the equation). Further, these concatenated features are processed for dimensionality reduction.
features = [X_norm, V, θ, L] ----------- (7)
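The following non-limiting sketch illustrates Equations (4) to (7), assuming the window is a list of N flat numpy arrays of nine joints each; the joint-pair indices mirror Table 1 under the index assumptions of the earlier sketches.
import numpy as np

# Assumed joint index pairs corresponding to Table 1, within the 9-joint skeleton:
# 0 Neck, 1 RShoulder, 2 RElbow, 3 RWrist, 4 LShoulder, 5 LElbow, 6 LWrist, 7 RHip, 8 LHip
PAIRS = [(0, 1), (0, 4), (1, 2), (4, 5), (2, 3), (5, 6), (3, 7), (6, 8)]

def motion_features(skeletons):
    """Eq. (4): frame-to-frame displacement of every joint coordinate."""
    S = np.stack([np.asarray(s, dtype=float) for s in skeletons])
    return S[1:] - S[:-1]                       # shape (N-1, 18)

def angle_and_distance_features(skeletons):
    """Eqs. (5) and (6): angle and limb length for each pair of Table 1, per frame."""
    angles, lengths = [], []
    for s in skeletons:
        s = np.asarray(s, dtype=float).reshape(-1, 2)
        for j, k in PAIRS:
            dx, dy = s[j, 0] - s[k, 0], s[j, 1] - s[k, 1]
            angles.append(np.arctan2(dx, dy))   # Eq. (5): arctangent of dx/dy
            lengths.append(np.hypot(dx, dy))    # Eq. (6): Euclidean distance
    return np.array(angles), np.array(lengths)

def fuse_features(x_norm, v, theta, l):
    """Eq. (7): concatenate all feature groups into one feature vector."""
    return np.concatenate([np.ravel(x_norm), np.ravel(v), theta, l])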
Table 1: Angle and distance calculation between points
Angle (θ) and Distance (L) Part 1 Part 2
1 Neck Right Shoulder
2 Neck Left Shoulder
3 Right Shoulder Right Elbow
4 Left Shoulder Left Elbow
5 Right Elbow Right Wrist
6 Left Elbow Left Wrist
7 Right Wrist Right Hip
8 Left Wrist Left Hip
The summary of the extracted features is presented in Table 2. As the value of N is set to 145, the dimensions of the extracted features obtained by X_norm, V, θ and L are 2610, 2592, 1160 and 1160, respectively. Therefore, the total feature length to represent an action over N frames is 7522. Further, this feature set is supplied to the dimensionality reduction module to select prominent features.
Table 2: Features Summary
Features | Representation | Evaluation Vectors
Normalized body joints (X_norm) | 9 joints × 2 positions/joint × N frames | 2610
Motion between body joints (V) | 9 joints × 2 positions/joint × (N - 1) frames | 2592
Angle between body joints (θ) | 8 angles × N frames | 1160
Distance between body joints (L) | 8 lengths × N frames | 1160
Total feature vectors | [X_norm, V, θ, L] | 7522
Feature Reduction Module/Dimensionality Reduction
After the pertinent features have been evaluated over the postural data, a dimensionality reduction module is used to remove the least significant features from the data, as these might degrade the model's accuracy. The present invention uses Principal Component Analysis (PCA), an unsupervised feature transformation technique, for dimensionality reduction. PCA identifies patterns in data based on the correlation between features and transforms them into a new subspace with a lower dimensionality without sacrificing any important information.
It is common to speed up machine learning algorithms by eliminating correlated variables that do not contribute to any decision-making. In essence, PCA constructs an m×n dimensional transformation matrix W that maps a sample/feature vector x onto a new n-dimensional feature subspace, as shown in equation 8.
x = [x_1, x_2, ..., x_m], x ∈ R^m
z = xW, W ∈ R^(m×n)
z = [z_1, z_2, ..., z_n], z ∈ R^n ----------- (8)
As a result of remodelling the m-dimensional input data into an n-dimensional subspace (where n ≤ m), the principal components are arranged in descending order of variance, under the constraint that each principal component is uncorrelated with the other components. Finally, these components are used to represent the features that are further used to build the deep neural architecture for classification.
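A non-limiting sketch of this reduction step using scikit-learn is shown below; the use of scikit-learn and the number of retained components (4800, matching the classifier input size described later) are assumptions made for illustration.
from sklearn.decomposition import PCA

def reduce_features(feature_matrix, n_components=4800):
    """Project the fused feature vectors onto their leading principal components."""
    # feature_matrix: shape (num_samples, 7522); n_components must not exceed
    # min(num_samples, num_features)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(feature_matrix)   # shape (num_samples, n_components)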
Activity Classification Module
The last module is the Activity Classification module, which takes the reduced feature set and performs the classification task to label the human activity as shoplifting or normal. Following the deep CNN architecture, the present invention replaces the functionality of the convolution and max-pooling operations with the present feature extraction method and then uses a fully connected feed-forward neural network (FFNN) architecture to learn and perform the classification task. FFNN is a supervised learning approach that learns a function f(·): R^n → R^o by training on a dataset, where n and o represent the input and output dimensions, respectively. The present invention utilizes an FFNN with one input layer, three hidden layers and one output layer. The size of the input layer is 4800, representing the input features. Hidden layers I, II and III consist of 1024, 512 and 256 neurons, respectively. On the other side, the output layer contains two neurons that represent the two classes, namely normal and shoplifting. The FFNN is tuned with the Tanh activation function in the hidden layers and the sigmoid activation function in the output layer. The Adam optimizer is used to optimize the weights of the neural network, and the learning rate is 0.001.
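A non-limiting sketch of such a classifier using Keras is given below, following the layer sizes, activations, optimizer and learning rate stated above; the choice of Keras and of the loss function are assumptions of this sketch, not statements of the original implementation.
import tensorflow as tf

def build_classifier(input_dim=4800, learning_rate=0.001):
    """FFNN with 1024-512-256 tanh hidden layers and a 2-neuron sigmoid output."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(1024, activation="tanh"),
        tf.keras.layers.Dense(512, activation="tanh"),
        tf.keras.layers.Dense(256, activation="tanh"),
        tf.keras.layers.Dense(2, activation="sigmoid"),   # classes: normal, shoplifting
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="binary_crossentropy",   # assumed loss; the specification does not state one
        metrics=["accuracy"],
    )
    return model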
Results:
In the test example, two videos were captured with different activities of the person, i.e. normal and shoplifting. The accuracy results are tabulated in Table 3, where the processing was done offline on the two captured videos.
The results are derived both for naked face and covered face conditions using the system of the present invention.
Table 3: Performance measurements for the present invention
Confusion Matrix: TP = 15, FN = 0, FP = 1, TN = 16
Measures | Values
Positive Predictive Value (PPV) | 0.937
True Positive Rate (TPR) | 1.00
False Positive Rate (FPR) | 0.058
Accuracy | 96.87%
The evaluation parameters used are True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN). The evaluation metrics based on these parameters are Positive Predictive Value (PPV), True Positive Rate (TPR), False Positive Rate (FPR) and Accuracy. These are defined as follows:
TP – number of frames in which a human is present and is correctly detected as human
FP – number of frames in which no human is present but a human is detected
TN – number of frames in which no human is present and no human is detected
FN – number of frames in which a human is present but is not detected as human
PPV= TP/(TP+FP) ----------- (9)
TPR= TP/(TP+FN) ----------- (10)
FPR= FP/(FP+TN) ----------- (11)
Accuracy= (TP+TN)/(TP+FN+TN+FP) ------------ (12)
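As a non-limiting worked check of Equations (9) to (12) against the counts reported in Table 3:
def evaluation_metrics(tp, fp, tn, fn):
    """Compute PPV, TPR, FPR and Accuracy from confusion-matrix counts (Eqs. 9-12)."""
    ppv = tp / (tp + fp)
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return ppv, tpr, fpr, accuracy

# Counts from Table 3: TP = 15, FN = 0, FP = 1, TN = 16
ppv, tpr, fpr, acc = evaluation_metrics(tp=15, fp=1, tn=16, fn=0)
# ppv = 0.9375, tpr = 1.0, fpr = 0.0588..., acc = 0.96875, consistent with the
# reported 0.937, 1.00, 0.058 and 96.87% after truncation to the reported precision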
Advantages of the Present Invention
The present invention provides human activity recognition in typical store scenes where the captured images/videos are affected by various noises arising from illumination variations and occluded environments.
The system of the present invention requires minimal infrastructure for the surveillance site setup. The system is designed to work even with a single camera and can monitor a long range of the field of view in megastores.
The present system identifies shoplifting actions accurately with respect to the person's movement.
Although the invention has been described with reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention. It is therefore contemplated that such modifications can be made without departing from the scope of the present invention as defined.
We Claim:
1. An AI (Artificial Intelligence) based surveillance system to monitor shoplifting activity in a megastore, the system comprising:
a camera (101) mounted at a height and at a distance to capture a real-time video sequence;
a posture generation module, wherein the posture generation module extracts and processes a human skeleton from the captured video sequences;
a skeleton pre-processing module to discard irrelevant frames of the extracted human skeleton, choose selective joints, perform coordinate scaling and fill missing joints of the human skeleton to get an encoded skeleton;
a feature evaluation module to extract the relevant features from the encoded skeletons and fuse the relevant features;
a feature reduction module to transform the large size feature set received from the feature evaluation module into compact sized features by suppressing redundant and insignificant information;
an activity classification module to classify the final sequences received from the feature reduction module;
a monitoring module (102); and
an alert generating module (103).
2. The system as claimed in claim 1, wherein the posture generation module extracts the skeleton information of a person and localizes the 2D body joints present in the human skeleton.
3. The system as claimed in claim 1, wherein the skeleton pre-processing module comprises:
• discarding the frames in which no human skeleton is detected or in which the detected skeleton lacks a neck or thigh,
• choosing the selective joints by removing five points of the head and four points of the legs so as to have the final skeleton with nine body joints, namely one neck, two shoulders, two elbows, two wrists, and two hips,
• coordinate scaling by subtracting the coordinate value of the neck joint from the coordinate values of all the body joints, and
• filling the missing joints in a current frame which are calculated through its relative positioning with respect to the previous frames in the sequence.
4. The system as claimed in claim 1, wherein the extracting of the relevant features in the feature evaluation module comprises:
• extracting spatial features by normalizing body joints,
• extracting motion features by subtracting the joint position in the previous frame from the joint position in the current frame,
• extracting angular features by calculating angles between body joints, and
• extracting distance features by calculating length of each limb.
5. The system as claimed in claim 1, wherein the feature reduction module comprises a Principal Component Analysis which identifies patterns in data based on the correlation between features and transforms them into new subspace with a lower dimensionality without sacrificing any important information.
6. A method of monitoring shoplifting activity in a megastore through an AI (Artificial Intelligence) based surveillance system, the method comprising the steps of:
a) capturing a real time video sequence through a camera (101);
b) extracting and processing a human skeleton from the captured video sequences of step (a) through a posture generation module;
c) discarding irrelevant frames, choosing selective joints, coordinate scaling and filling missing joints of the human skeleton of step (b) through a skeleton pre-processing module;
d) extracting the relevant features from the encoded skeletons of step (c) and fusing the features through a feature evaluation process;
e) transforming the large size features set extracted in step (d) into compact sized features by suppressing redundant and insignificant information through a feature reduction module; and
f) classifying the sequences of step (e) through an activity classification module, the classified sequences being analysed in a monitoring module (102).
7. The method as claimed in claim 6, wherein the posture generation module extracts the skeleton information of a person and localizes the 2D body joints present in the human skeleton.
8. The method as claimed in claim 6, wherein the skeleton pre-processing module comprises:
• discarding the frames in which no human skeleton is detected or in which the detected skeleton lacks a neck or thigh,
• choosing the selective joints by removing five points of the head and four points of the legs so as to have the final skeleton with nine body joints, namely one neck, two shoulders, two elbows, two wrists, and two hips,
• coordinate scaling by subtracting the coordinate value of the neck joint from the coordinate values of all the body joints, and
• filling the missing joints in a current frame which are calculated through its relative positioning with respect to the previous frames in the sequence.
9. The method as claimed in claim 6, wherein the extracting of the relevant features in the feature evaluation module comprises:
• extracting spatial features by normalizing body joints,
• extracting motion features by subtracting the joint position in the previous frame from the joint position in the current frame,
• extracting angular features by calculating angles between body joints, and
• extracting distance features by calculating length of each limb.
10. The method as claimed in claim 6, wherein the feature reduction module comprises a Principal Component Analysis (PCA) which identifies patterns in data based on the correlation between features and transforms them into new subspace with a lower dimensionality without sacrificing any important information.