
A Device And Process For Automatic Beverage Delivery

Abstract: The present disclosure discloses a device and process for automatic beverage delivery. The device includes a tray of cups filled with a beverage that is carried to people either assembled in a hall or working at their desks in an office. The device can avoid obstacles during its movement, follow a line marked on the floor, detect hands when people pick up the beverage cups, distinguish between a person and an inanimate object while navigating, and return to the filling station on detecting an empty tray or after a pre-determined amount of time. It has several modes of operation catering to different use cases. A user can select from these modes to achieve the desired movement from the device.


Patent Information

Application #
Filing Date
20 March 2014
Publication Number
39/2015
Publication Type
INA
Invention Field
MECHANICAL ENGINEERING
Status
Email
dewan@rkdewanmail.com
Parent Application
Patent Number
Legal Status
Grant Date
2021-07-08
Renewal Date

Applicants

TATA CONSULTANCY SERVICES LIMITED
Nirmal Building, 9th Floor, Nariman Point, Mumbai – 400 021, Maharashtra, India

Inventors

1. KUMAR SWAGAT
TCS,154B, Block A, Sector-63, Noida, Uttar Pradesh - 201301, India
2. GARG SOURAV
TCS,154B, Block A, Sector-63, Noida, Uttar Pradesh - 201301, India
3. KEJRIWAL NISHANT
TCS,154B, Block A, Sector-63, Noida, Uttar Pradesh - 201301, India

Specification

CLAIMS: 1. A beverage serving ambulatory device adapted to move on a floor in a pre-determined environment, said device comprising:
i. a wheeled base adapted to permit movement of said device;
ii. at least one motor adapted to drive the wheeled base;
iii. a frame mounted on said wheeled base;
iv. a tray fitted on the frame and adapted to removably hold filled and empty beverage cups;
v. a processor mounted on the frame and adapted to receive signals, said processor having a controller adapted to control the movement of the motor in response to commands received from the processor to control start and stop, and back and forth movement of said device on the floor;
vi. a mode selector adapted to edit the command signals to be transmitted to the controller for the movement of the motor;
vii. a first image capturing tool mounted on said frame and adapted to capture first images comprising images surrounding the device and images of the tray for transmission to the processor;
viii. a second image capturing tool mounted on said frame and adapted to capture second images of the floor surrounding said device for transmission to the processor; and
ix. sensors mounted on said wheeled base and adapted to sense the presence of obstacles for transmission to the processor.
2. The device as claimed in claim 1, where said mode selector edits the command signals based on the mode selected by a user, said modes to be selected comprise:
• a path following mode wherein said device follows a pre-determined path, images of which are captured by said second image capturing tool;
• a wander mode wherein said device moves autonomously without colliding with surrounding objects based on the images captured by said first image capturing tool and also on the obstacles sensed by said sensors; and
• a human-following mode wherein said device follows a person walking in front of the device based on the images captured by said first image capturing tool.
3. The device as claimed in claim 1, where said floor has marked pre-defined paths and wherein the processor receives images of said paths and also receives the user selected mode and transmits commands to the controller for directing the motion of the device along said paths.
4. The device as claimed in claim 1, wherein the first images captured include images of a person’s hand approaching the tray and where the processor receives the images of the person’s hand and transmits commands to the controller for stopping the motion of the device for a pre-determined duration.
5. The device as claimed in claim 1, wherein the first images captured include images of an empty tray and where the processor receives the empty tray images and transmits commands to the controller to direct the motion of the device to a beverage re-filling station.
6. The device as claimed in claim 1, wherein the processor receives the sensed signals and also receives the user selected mode and transmits commands to the controller to direct the motion of the device to avoid obstacles.
7. The device as claimed in claim 1, wherein the first images captured include images of a person walking in front of the device and where the processor receives the images of the person walking in front of the device and also receives the user selected mode and transmits commands to the controller to direct the motion of the device to follow the person.
8. The device as claimed in claim 1, wherein said processor is further adapted to transmit commands to the controller to direct the motion of the device to a beverage re-filling station after a pre-determined duration.
9. The device as claimed in claim 1, wherein the device further comprises a greeting module adapted to receive commands from the processor and is further adapted to provide a greeting message through a speaker or a display, where type of the greeting message is selected from the group consisting of voice, video, image and text.
10. A process of serving beverages using an ambulatory device which is adapted to move on a floor in a pre-determined environment, said process includes:
• permitting movement of said device through the use of a wheeled base;
• driving the wheeled base by a motor;
• mounting a frame on the wheeled base;
• removably holding filled and empty beverage cups on a tray mounted on the wheeled base;
• mounting a processor on the frame;
• receiving signals and controlling the movement of the motor by transmitting commands to a controller to control start and stop, and back and forth movement of said device on the floor;
• selecting a mode to edit the command signals to be transmitted to the controller for the movement of the motor;
• capturing first images comprising images surrounding the device and images of the tray for transmission to the processor;
• capturing second images of the floor surrounding said device for transmission to the processor; and
• sensing the presence of obstacles for transmission to the processor.
11. The process as claimed in claim 10, wherein the step of selecting a mode is performed through said mode selector, and the modes included in said mode selector comprise:
• following a pre-determined path, images of which are captured by said second image capturing tool;
• moving autonomously without colliding with surrounding objects based on the images captured by said first image capturing tool and also on obstacles sensed by said sensors; and
• following a person walking in front of the device based on the images captured by said first image capturing tool.
12. The process as claimed in claim 10, wherein step of transmitting the commands to the controller for directing the motion of the device along said paths includes capturing images of marked pre-defined paths on the floor and receiving the images of said paths by the processor along with the user selected mode.
13. The process as claimed in claim 10, wherein the step of transmitting commands to the controller for halting the motion of the device for a pre-determined duration includes capturing first images of a person’s hand approaching the tray and receiving the captured images of the person’s hand.
14. The process as claimed in claim 10, wherein the step of transmitting commands to the controller to direct the motion of the device to a beverage re-filling station includes capturing first images of an empty tray and receiving the empty tray images.
15. The process as claimed in claim 10, wherein the step of transmitting commands to the controller to direct the motion of the device to avoid the obstacles includes receiving sensed signals reflected from said obstacles along with the user selected mode.
16. The process as claimed in claim 10, wherein the step of transmitting commands to the controller to direct the motion of the device to follow a person includes capturing and receiving first images of a person walking in front of the device along with the user selected mode.
17. The process as claimed in claim 10, wherein the process further includes step of transmitting commands to the controller to direct the motion of the device to a beverage re-filling station after a pre-determined duration.
18. The process as claimed in claim 10, wherein the process further includes step of providing a greeting message through a speaker or a display, where type of the greeting message is selected from the group consisting of voice, video, image and text.
FIELD OF THE DISCLOSURE

The present disclosure relates to the field of devices capable of beverage delivery.
DEFINITIONS OF TERMS USED IN THE SPECIFICATION
The expression ‘beverage cups’ used hereinafter in this specification refers to containers for holding/receiving beverages selected from the group consisting of cups, glasses, cans, bottles and mugs.
This definition is in addition to those expressed in the art.

BACKGROUND
Currently, there are devices that prepare food items and beverages in eateries. Some places have devices that deliver the food items to customers; these devices are usually stationary. A few restaurants have devices that can take orders from customers or serve them by placing the serving plates on conveyor belts. However, there is a lack of devices that are specifically suited to an office or a party environment and that can move freely to provide a service.
Therefore, there is felt a need for a device that can carry beverages in an office or party environment, to employees' desks or to individuals present at a party, by either following a pre-set track or by reaching the destination autonomously while avoiding obstacles.
OBJECTS
An object of the present disclosure is to provide a device for automatic beverage delivery.
Another object of the present disclosure is to provide a device that can detect obstacles in its path while moving autonomously when carrying the beverages to be served.
Yet another object of the present disclosure is to provide a device that can follow a pre-determined path marked on the floor.
Still another object of the present disclosure is to provide a device that can follow a person walking in front of the device.
Another object of the present disclosure is to provide a device that can detect a person's hand approaching to collect the beverage cup and in turn halt and greet the person.
Yet another object of the present disclosure is to provide a device that can return to a beverage re-filling station after a pre-determined duration or when the tray containing beverage cups is empty.
Other objects and advantages of the present disclosure will be more apparent from the following description when read in conjunction with the accompanying figures, which are not intended to limit the scope of the present disclosure.

SUMMARY
The present disclosure envisages a beverage serving ambulatory device that is adapted to move on a floor in a pre-determined environment.
Typically, in accordance with the present disclosure, the device includes:
• a wheeled base adapted to permit movement of the device;
• at least one motor adapted to drive the wheeled base;
• a frame mounted on the wheeled base;
• a tray fitted on the frame and adapted to removably hold filled and empty beverage cups;
• a processor mounted on the frame and adapted to receive signals, wherein the processor has a controller adapted to control the movement of the motor in response to commands received from the processor to control start and stop, and back and forth movement of the device on the floor;
• a mode selector adapted to edit the command signals to be transmitted to the controller for the movement of the motor;
• a first image capturing tool mounted on the frame and adapted to capture first images comprising images surrounding the device and images of the tray for transmission to the processor;
• a second image capturing tool mounted on the frame and adapted to capture second images of the floor surrounding the device for transmission to the processor; and
• sensors mounted on the wheeled base and adapted to sense the presence of obstacles for transmission to the processor.
Further, in accordance with the present disclosure, the mode selector present in the device, edits the command signals based on the mode selected by a user, modes to be selected comprise:
• a path following mode wherein the device follows a pre-determined path, images of which are captured by the second image capturing tool;
• a wander mode wherein the device moves autonomously without colliding with surrounding objects based on the images captured by the first image capturing tool and also on the obstacles sensed by the sensors; and
• a human-following mode wherein the device follows a person walking in front of the device based on the images captured by the first image capturing tool.
Furthermore, in accordance with the present disclosure, the floor has marked pre-defined paths and the processor receives images of these paths and also receives the user selected mode and transmits commands to the controller for directing the motion of the device along the paths.
Still further, in accordance with the present disclosure, the first images captured include images of a person’s hand approaching the tray and where the processor receives the images of the person’s hand and transmits commands to the controller for stopping the motion of the device for a pre-determined duration.
Additionally, in accordance with the present disclosure, the first images captured include images of an empty tray and where the processor receives the empty tray images and transmits commands to the controller to direct the motion of the device to a beverage re-filling station.
Further, in accordance with the present disclosure, the processor receives the sensed signals and also receives the user selected mode and transmits commands to the controller to direct the motion of the device to avoid obstacles.
Furthermore, in accordance with the present disclosure, the first images captured include images of a person walking in front of the device and where the processor receives the images of the person walking in front of the device and also receives the user selected mode and transmits commands to the controller to direct the motion of the device to follow the person.
Still further, in accordance with the present disclosure, the processor is also adapted to transmit commands to the controller to direct the motion of the device to a beverage re-filling station after a pre-determined duration.
Additionally, in accordance with the present disclosure, the device further comprises a greeting module adapted to receive commands from the processor and is further adapted to provide a greeting message through a speaker or a display, where type of the greeting message is selected from the group consisting of voice, video, image and text.
In accordance with the present disclosure, there is provided a process of serving beverages using an ambulatory device which is adapted to move on a floor in a pre-determined environment, the process comprises following steps:
• permitting movement of the device through the use of a wheeled base;
• driving the wheeled base by a motor;
• mounting a frame on the wheeled base;
• removably holding filled and empty beverage cups on a tray mounted on the wheeled base;
• mounting a processor on the frame;
• receiving signals and controlling the movement of the motor by transmitting commands to a controller to control start and stop, and back and forth movement of the device on the floor;
• selecting a mode to edit the command signals to be transmitted to the controller for the movement of the motor;
• capturing first images comprising images surrounding the device and images of the tray for transmission to the processor;
• capturing second images of the floor surrounding the device for transmission to the processor; and
• sensing the presence of obstacles for transmission to the processor.
Additionally, in accordance with the present disclosure, the process includes the step of selecting a mode through the mode selector, and the modes included in the mode selector include:
• following a pre-determined path, images of which are captured by the second image capturing tool;
• moving autonomously without colliding with surrounding objects based on the images captured by the first image capturing tool and also on obstacles sensed by the sensors; and
• following a person walking in front of the device based on the images captured by the first image capturing tool.
Typically, in accordance with the present disclosure, the step of transmitting the commands to the controller for directing the motion of the device along the paths further includes step of capturing images of marked pre-defined paths on the floor and receiving the images of these paths by the processor along with the user selected mode.
Further, in accordance with the present disclosure, the step of transmitting commands to the controller for halting the motion of the device for a pre-determined duration includes step of capturing first images of a person’s hand approaching the tray and receiving the captured images of the person’s hand.
Furthermore, in accordance with the present disclosure, the step of transmitting commands to the controller to direct the motion of the device to a beverage re-filling station includes step of capturing first images of an empty tray and receiving the empty tray images.
Still further, in accordance with the present disclosure, the step of transmitting commands to the controller to direct the motion of the device to avoid the obstacles includes step of receiving sensed signals reflected from the obstacles along with the user selected mode.
Further, in accordance with the present disclosure, the step of transmitting commands to the controller to direct the motion of the device to follow a person includes step of capturing and receiving first images of a person walking in front of the device along with the user selected mode.
Additionally, in accordance with the present disclosure, the process further includes step of transmitting commands to the controller to direct the motion of the device to a beverage re-filling station after a pre-determined duration.
Furthermore, in accordance with the present disclosure, the process also includes step of providing a greeting message through a speaker or a display, where type of the greeting message is selected from the group consisting of voice, video, image and text.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS

The device and process for automatic beverage delivery will now be described with the help of the accompanying drawings, in which:
Figure 1 illustrates the schematic of an automatic beverage delivering device.
Figure 2 illustrates a perspective view of an exemplary embodiment of an automatic beverage delivery device according to the present disclosure.
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS

A preferred embodiment of the device and process for automatic beverage delivery of the present disclosure will now be described in detail with reference to the accompanying drawings. The preferred embodiment does not limit the scope and ambit of the disclosure. The description provided is purely by way of example and illustration.

The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
Referring to the accompanying drawings, Figure 1 illustrates a beverage serving moving device 100 adapted to move on a floor in a pre-determined environment. The beverage serving moving device 100 includes a wheeled base 102 that allows the device to move freely in a desired environment. This wheeled base 102 is fitted with a frame 104 on which a tray 106 holding beverage cups is placed. The device 100 moves around while carrying the beverage cups and halts when a person approaches the tray 106 to pick up a beverage cup. In one embodiment of the present disclosure, the device can greet the person when it senses the person's hand approaching the tray 106. The device 100 includes a motor 108 that drives the wheeled base, allowing the device 100 to move freely in the environment.
The device includes a first image capturing tool 114 that is mounted on the frame 104. The first image capturing tool 114 captures images surrounding the device including the images that represent a person’s hand approaching the tray, an empty tray, detecting people and the like. These images are passed on to a processor 112 for further processing. A second image capturing tool 116 is also mounted on the frame to capture second images of the floor surrounding the device. These second images are the images of a pre-determined path that is marked on the floor of the environment. These images are then passed on to the processor 112 for further processing. The device 100 also includes sensors 118 that are mounted on the wheeled base 102 to sense the presence of obstacles during the movement of the device 100.
A mode selector 120 is included in the device 100 that allows a user to select the mode in which the device 100 is to be operated. The three modes of operation are explained below:
• A path following mode – when this mode is selected, the processor 112 accepts the images from the second image capturing tool 116. These images represent the marked paths present on the floor of the environment. The processor 112 then commands a controller 110 to control the movement of motor 108 to follow the marked paths.
• A human following mode – when this mode is selected, the processor 112 accepts the images from the first image capturing tool 114, wherein the images of a person walking in front of the device 100 are captured. The processor 112 then commands the controller 110 to control the movement of the motor 108 to follow the person.
• A wander mode – when this mode is selected, the processor 112 accepts the images from the first image capturing tool 114 and the sensors 118. The processor 112 then commands the controller 110 to control the movement of the motor 108 to move around freely in the environment and avoid obstacles in its path.
The processor 112 processes the images and the sensed signals to control start and stop and back and forth movement of the device 100.
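The per-mode routing of sensor inputs described above can be sketched as a simple dispatch. This is purely an illustrative sketch, assuming mode names and stream labels that are not part of the disclosed firmware:

```python
from enum import Enum, auto

class Mode(Enum):
    PATH_FOLLOWING = auto()   # follow the marked path on the floor
    HUMAN_FOLLOWING = auto()  # follow a person walking in front
    WANDER = auto()           # roam freely while avoiding obstacles

def select_inputs(mode):
    """Return which sensor streams the processor consumes in each mode
    (labels are hypothetical, chosen to mirror the description)."""
    if mode is Mode.PATH_FOLLOWING:
        return {"second_camera"}            # floor images of the marked path
    if mode is Mode.HUMAN_FOLLOWING:
        return {"first_camera"}             # images of the person in front
    if mode is Mode.WANDER:
        return {"first_camera", "sonar"}    # surroundings plus obstacle sensors
    raise ValueError(f"unknown mode: {mode}")
```

The mode selector then only has to edit which streams feed the processor; the controller-facing command path stays the same in all three modes.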
According to one embodiment of the present disclosure, the device can be configured to follow a color-based path that is pasted or embedded on the floor of a desired environment. For example, consider a path formed by a red ribbon pasted on a dark carpet, with a vertically mounted image capturing tool provided to detect this path. According to the present disclosure, the image capturing tool captures only a part of the total image (the Region of Interest) in order to reduce computation time. The captured Region of Interest (ROI) is then converted from RGB to HSV. As the path is red, corresponding thresholds on the hue value are applied and the ROI is converted into a binary image. This binary image is then filtered to remove noise, and blob detection is used to find the blob in the binary image. The center of the blob is used for aligning the robot along the path, reducing the error between the blob center and the center of the ROI. A left or right turn is detected by computing the angle of the line segment connecting the center of the ROI to the center of the circle obtained from the blob. Similarly, the end of a path is detected when no blobs are found in the selected ROI. Thus, the centroid of the blobs is used to generate motion commands for the device.
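The blob-centroid steering step can be sketched in a few lines. This is a minimal illustration using a nested-list binary image; the function names and the ROI-center convention are assumptions, not the disclosed implementation:

```python
import math

def blob_centroid(binary):
    """Centroid (row, col) of the foreground pixels in a binary ROI,
    or None when no blob is present (interpreted as end of path)."""
    pts = [(r, c) for r, row in enumerate(binary)
                  for c, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def steering(binary):
    """Lateral error of the blob center from the ROI center, and the
    heading angle (degrees) of the segment joining the two."""
    c = blob_centroid(binary)
    if c is None:
        return None                      # end of path: no blob in the ROI
    rows, cols = len(binary), len(binary[0])
    center = ((rows - 1) / 2, (cols - 1) / 2)
    error = c[1] - center[1]             # positive: path lies to the right
    angle = math.degrees(math.atan2(c[1] - center[1], c[0] - center[0]))
    return error, angle
```

Reducing `error` toward zero keeps the robot on the line, while a large `angle` flags an upcoming left or right turn, much as the paragraph above describes.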
In another embodiment of the present disclosure, the device is able to detect the hands of a person standing in front of the device based on a depth map and skin color. In this embodiment, the device can stop and greet the user after detecting the hands. For this purpose, detecting fingers and the palm is less crucial than detecting skin color to identify a human hand. Depth values obtained from the image capturing tool are used to remove the background and obtain a silhouette of the object. This object is identified as a human hand by detecting skin color. A Gaussian Mixture Model (GMM) can be used to model the distribution of skin color in the HSV color space. The value component in HSV is not used, so as to mitigate the effect of illumination on skin-color detection. The GMM can be trained off-line with the EM algorithm using manually collected skin-color templates.
A binary hand detection (HD) flag is set or reset depending on whether a hand is detected or not. The device of the present disclosure is adapted to serve beverages to people present in a particular environment.
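The depth-gated skin check can be illustrated with a simplified stand-in for the trained GMM: a fixed hue/saturation gate rather than a learned mixture. The pixel format, depth window and thresholds below are all assumptions made for illustration only:

```python
import colorsys

def hand_detected(pixels, near=0.55, far=1.5,
                  hue_max=0.10, sat_min=0.2, min_frac=0.3):
    """Simplified HD flag. `pixels` is a list of (r, g, b, depth_m)
    tuples. The depth window removes the background; a hue/saturation
    gate stands in for the GMM skin model. The V component is ignored,
    as in the disclosure, to reduce the effect of illumination."""
    fg = [(r, g, b) for r, g, b, d in pixels if near <= d <= far]
    if not fg:
        return False                      # nothing in the near field
    skin = 0
    for r, g, b in fg:
        h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # skin hues cluster near red (hue wraps around 0)
        if (h <= hue_max or h >= 1 - hue_max) and s >= sat_min:
            skin += 1
    return skin / len(fg) >= min_frac     # enough skin-colored pixels
```

A real GMM would replace the fixed gate with a per-pixel likelihood threshold, but the depth-then-color pipeline is the same.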
In one embodiment of the present disclosure, the device is expected to detect when the tray holding the beverage cups is empty, in order to avoid serving from an empty tray. To enable this detection, this embodiment uses white paper cups as the beverage cups, placed on a dark-colored tray; this provides a natural contrast between the cups and the tray. A support vector machine (SVM) is used to classify a given image into one of two classes: empty or non-empty. To reduce the complexity associated with a high-dimensional input space, color histograms are used as the feature vector for training the SVM, with a Gaussian RBF kernel function. In this case, a gray-scale color histogram is used as the input feature vector. A region of interest (ROI) of the tray is selected from the image, and a normalized histogram of pixel values in gray-scale color space is then created for this ROI. When the gray-scale histograms for an empty and a non-empty tray are compared, it is seen that the presence of cups in the tray contributes to the bins on the brighter side of the gray-scale range.
When an empty tray is detected, the empty tray (ET) flag is set. When the ET flag is set, the device is directed to move towards a refill station. The refill station is usually at the end of the path being followed. In the wander mode, the refill station may be identified by detecting a special color or pattern in the environment.
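The bright-bin observation admits a much simpler stand-in for the SVM, shown here purely for illustration: a threshold on the mass of the brighter histogram bins. The bin count and threshold are assumed values, not the trained classifier of the disclosure:

```python
def gray_histogram(gray, bins=8):
    """Normalized histogram of 0-255 gray values over the tray ROI
    (given as a nested list of pixel intensities)."""
    hist = [0] * bins
    n = 0
    for row in gray:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
            n += 1
    return [h / n for h in hist]

def tray_is_empty(gray, bright_frac=0.15):
    """ET flag: white cups on a dark tray add mass to the brighter
    histogram bins, so a small bright-side mass suggests an empty tray.
    A trained SVM on the full histogram would replace this threshold."""
    hist = gray_histogram(gray)
    bright = sum(hist[len(hist) // 2:])   # upper half of the gray range
    return bright < bright_frac
```

The same normalized histogram would serve as the input feature vector to the Gaussian-RBF SVM described above.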
According to another embodiment, the device can be configured to operate in a human-following mode, carrying loads required by people working in hospitals, restaurants, airports or shopping malls. Since the device accompanies a person at a close distance, it can be made to understand various human gestures, so a human-following device can lead to more effective human-device interaction. In this embodiment, the person being followed is detected in the images through color, and the distance between the detected person and the device is obtained from a depth map. This information is used to drive the device so as to maintain a constant distance from the person being followed. A color-based mean-shift tracker can be used as the primary tracker for the human torso. In order to achieve robustness against varying illumination conditions, another mean-shift tracker using SURF histograms can be used. These two trackers run in parallel and correct each other whenever one of them fails to detect the object.
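Maintaining a constant distance from the depth reading can be sketched as a proportional controller. The gain, target distance and speed limit below are assumed values for illustration, not parameters from the disclosure:

```python
def follow_velocity(depth_m, target_m=1.0, gain=0.8, v_max=0.5):
    """Forward velocity command (m/s) that keeps a constant distance to
    the tracked person: positive when the person pulls ahead, negative
    when they step back, clamped to the platform's speed limit."""
    v = gain * (depth_m - target_m)
    return max(-v_max, min(v_max, v))
```

At each frame, the tracker supplies `depth_m` for the tracked torso and the command is forwarded to the motor controller; the clamp keeps the device from lunging when the person moves suddenly.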
Referring to the accompanying drawings, Figure 2 illustrates a perspective view of an exemplary embodiment of an automatic beverage delivery device 200 according to the present disclosure. To allow movement of the device, a wheeled base 202 is used. An L-shaped frame 204, having a vertical wing (a) and a horizontal wing (b) (both wings preferably 60-65 cm long), is mounted on top of the wheeled base 202 to house a tray, and an external processor 214 is placed on the (b) wing of the L-shaped frame 204. This processor 214 drives a motor (not shown in the figure) to enable movement of the device. A first image capturing tool 208 is mounted on the (a) wing of the L-shaped frame 204. A Microsoft Kinect camera can be used as the first image capturing tool 208 in order to capture images surrounding the device, including images that represent a person's hand approaching the tray, an empty tray, people in the vicinity, and the like. A second image capturing tool 210 is attached to the (b) wing of the L-shaped frame 204 in order to capture images of the floor surrounding the device; it can be a webcam that detects marked paths on the floor. The device also includes a set of sensors 212 mounted on the wheeled base 202; SONAR arrays can be used as the sensors 212 for avoiding obstacles. The tray holds beverage cups 206 in such a way that it is visible to the first image capturing tool 208 in order to detect situations like an empty tray, a person approaching the device and hands approaching the tray. The line of sight of the first image capturing tool 208 is horizontal to the ground, and the tool has a blind region in its depth map: it cannot give the depth of any object that lies within a distance of about 55-60 cm from the camera. Hence, the tray has to be kept outside this blind zone.
The second image capturing tool 210 is placed vertically downward and hence avoids any blind zone in detecting the line.
TECHNICAL ADVANCEMENTS

The technical advancements offered by the present disclosure include the realization of:
• a device for automatic beverage delivery;
• a device that can detect obstacles in its path while moving autonomously when carrying the beverages to be served;
• a device that can follow a pre-determined path marked on the floor;
• a device that can follow a person walking in front of the device;
• a device that can detect a person’s hand approaching to collect the beverage cup and in turn halt and greet the person; and
• a device that can return to a beverage re-filling station after a pre-determined duration or when the tray containing beverage cups is empty.
Throughout this specification the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.

The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.

The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

Documents

Application Documents

# Name Date
1 T-Drgs.pdf 2018-08-11
2 T-3.pdf 2018-08-11
3 Form 2.pdf 2018-08-11
4 ABSTRACT1.jpg 2018-08-11
5 908-MUM-2014-HARD COPY OF 18(25-4-2014).pdf 2018-08-11
6 908-MUM-2014-FORM 26(15-5-2014).pdf 2018-08-11
7 908-MUM-2014-FORM 1(15-4-2014).pdf 2018-08-11
8 908-MUM-2014-CORRESPONDENDENCE(15-4-2014).pdf 2018-08-11
9 908-MUM-2014-CORRESPONDENCE(15-5-2014).pdf 2018-08-11
10 908-MUM-2014-FER.pdf 2018-11-26
11 908-MUM-2014-FORM-26 [19-12-2018(online)].pdf 2018-12-19
12 908-MUM-2014-OTHERS [05-02-2019(online)].pdf 2019-02-05
13 908-MUM-2014-FER_SER_REPLY [05-02-2019(online)].pdf 2019-02-05
14 908-MUM-2014-CLAIMS [05-02-2019(online)].pdf 2019-02-05
15 908-MUM-2014-ABSTRACT [05-02-2019(online)].pdf 2019-02-05
16 908-MUM-2014- ORIGINAL UR 6(1A) FORM 26-201218.pdf 2019-04-16
17 908-MUM-2014-US(14)-HearingNotice-(HearingDate-07-09-2020).pdf 2020-07-27
18 908-MUM-2014-FORM-26 [03-09-2020(online)].pdf 2020-09-03
19 908-MUM-2014-Correspondence to notify the Controller [03-09-2020(online)].pdf 2020-09-03
20 908-MUM-2014-Written submissions and relevant documents [22-09-2020(online)].pdf 2020-09-22
21 908-MUM-2014-PatentCertificate08-07-2021.pdf 2021-07-08
22 908-MUM-2014-IntimationOfGrant08-07-2021.pdf 2021-07-08
23 908-MUM-2014-RELEVANT DOCUMENTS [26-09-2022(online)].pdf 2022-09-26
24 908-MUM-2014-RELEVANT DOCUMENTS [30-09-2023(online)].pdf 2023-09-30

Search Strategy

1 search_03-08-2018.pdf

ERegister / Renewals

3rd: 21 Sep 2021

From 20/03/2016 - To 20/03/2017

4th: 21 Sep 2021

From 20/03/2017 - To 20/03/2018

5th: 21 Sep 2021

From 20/03/2018 - To 20/03/2019

6th: 21 Sep 2021

From 20/03/2019 - To 20/03/2020

7th: 21 Sep 2021

From 20/03/2020 - To 20/03/2021

8th: 21 Sep 2021

From 20/03/2021 - To 20/03/2022

9th: 10 Mar 2022

From 20/03/2022 - To 20/03/2023

10th: 03 Mar 2023

From 20/03/2023 - To 20/03/2024

11th: 18 Mar 2024

From 20/03/2024 - To 20/03/2025

12th: 08 Feb 2025

From 20/03/2025 - To 20/03/2026