METHOD FOR TRAINING NEURAL NETWORK TO PREDICT COST MAP

ABSTRACT

A method for preparing ground truth data for training a neural network to predict a cost map is provided. The method comprises receiving a video feed of a location. The method further comprises extracting a first set of frames from the video feed. The first set of frames defines an input dataset. The method further comprises creating, based on the input dataset, a cost map that defines ground truth data. The method further comprises training the neural network based on an application of the neural network on the input dataset such that the neural network generates a prediction corresponding to the ground truth data. The method further comprises extracting a second set of frames from a second video feed of the location. The method further comprises receiving a prediction corresponding to the cost map based on an application of the neural network on the second set of frames.

FIG. 1
TECHNICAL FIELD
The present disclosure relates to mapping and navigation technologies, and more particularly to a method for training a neural network model to predict a cost map.
BACKGROUND
Advancements in the field of mapping and navigation technology have led to the rapid development of various hardware and software tools (for example, sensors, computer vision algorithms, vehicle-to-everything communication systems, and so on). The tools may be employed in robots, autonomous vehicles, drones, and so on, for mapping an environment, navigating through the environment, localizing a position in the environment, communicating with other robots/drones/vehicles, and so on. Such hardware and software tools may be leveraged by industries such as manufacturing, defence, logistics, automotive, construction, agriculture, healthcare, and the like. For example, robots and drones may be used in the manufacturing and agricultural industries for automating tasks (such as packaging, inspection, or precision farming), improving efficiency, and enhancing capabilities such that the profitability of the industries is enhanced and production costs are minimized. With technological development in the area of autonomous navigation, there is an increased usage of semi-autonomous or fully autonomous (self-driving) vehicles, which may lead to a transformation in the transport experience. However, leveraging robots, drones, or autonomous vehicles may necessitate an emphasis on the development of a robust navigation system such that the robots, drones, or autonomous vehicles may use the navigation system to navigate along a route in an environment in which unpredictable obstacles may appear.
Typically, cost maps and path planning algorithms facilitate navigation by planning a route in the environment via which a mobile equipment (for example, a robot) may traverse. The route may include regions in the environment where the cost of traversal is likely to be low, and the cost maps may indicate regions in the environment that are traversable and regions that are not traversable. However, the cost maps and path planning algorithms may not be sufficient in themselves due to their reliance on static maps that may be generated prior to path/route planning. Further, conventional navigation tools may not be able to update a cost map based on real-time environmental information, such as the appearance or disappearance of obstacles in a previously planned route.
To address this issue, an obstacle layer may be introduced into the cost map. The obstacle layer may be configured to update the environmental information in real-time. The obstacle layer is used in conjunction with Video Detection and Ranging (ViDAR) to build a multi-camera-based depth estimation model. The building of the depth estimation model may involve capturing video streams of the environment and using the video streams to create 2-Dimensional (2D) or 3-Dimensional (3D) scans. Based on the 2D/3D scans, depth information may be computed, and, thereafter, a 3D model of the environment may be generated. Based on the 3D model, a cost map may be generated for enabling the robot to navigate across routes that may be determined based on the cost map. However, the generation of the cost map based on the 3D model may require significant computational power and involve significant latency. This may be due to the computation of the depth information and the generation of the 3D model.
Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with generation of cost maps for facilitating navigation of robots or vehicles.
SUMMARY
The present disclosure provides a method for preparing ground truth data for training a neural network to predict a cost map. The present disclosure provides a solution to the existing problem of path determination for navigating a mobile robot in a location. The solution provides a reliable and computationally efficient method for path determination compared to conventional methods. The solution constitutes training a neural network to predict the cost map by preparing ground truth data, with the neural network being trained based on the prepared ground truth data. An objective of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in the prior art and provides an improved path determination technique for navigating the mobile robot using a predicted cost map with minimum computational cost and latency.
One or more objectives of the present disclosure are achieved by the solutions provided in the enclosed independent claims. Advantageous implementations of the present disclosure are further defined in the dependent claims.
In one aspect, an embodiment of the present disclosure provides a method for preparing ground truth data for training a neural network to predict a cost map, the method comprising:
- receiving a first video feed of a location;
- extracting a first set of frames from the first video feed, wherein the first set of frames defines an input dataset;
- creating a cost map based on the first set of frames, wherein the cost map defines the ground truth data;
- training the neural network based on an application of the neural network on the input dataset such that the trained neural network generates a prediction corresponding to the ground truth data;
- extracting a second set of frames from a second video feed of the location; and
- receiving a prediction corresponding to the cost map based on an application of the neural network on the second set of frames.
Optionally, creating the cost map comprises:
- generating a 3-Dimensional (3D) model of the location by applying a photogrammetry technique on the first set of frames, and
- creating the cost map from the 3D model.
Optionally, each frame of the first set of frames is a Red-Green-Blue (RGB) image.
Optionally, training the neural network comprises identifying a cost function based on the created cost map.
Optionally, the identification of the cost function comprises:
- determining a set of variables based on which a cost is assigned to each region of the location for creation of the cost map, wherein each region corresponds to at least one pixel of at least one frame of the first set of frames, and
- determining a relationship between the cost and each variable of the set of variables, wherein the cost function is identified based on the determined relationship.
Optionally, the relationship may be determined based on one of a regression analysis or non-linear function modelling.
BRIEF DESCRIPTION OF THE DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to the specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 illustrates steps of a method for preparing ground truth data for training a neural network to predict a cost map, in accordance with an embodiment of the present disclosure; and
FIGs. 2A and 2B are schematic illustrations of input and output data of the neural network, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF INVENTION
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practising the present disclosure are also possible.
The present disclosure provides a method for preparing ground truth data for training the neural network. Once the neural network is trained based on the ground truth data, the neural network may predict a cost map for a location based on an application of the trained neural network on a set of frames (such as RGB images) extracted from a video feed of the location. The predicted cost map may indicate a cost associated with each region of the location. The cost may be assigned based on a set of variables associated with a region such that a cost function associated with training the neural network is minimized. Based on the cost map, one or more paths/routes may be determined through which a mobile robot or a vehicle may navigate to traverse from a source location to a destination location. The method is easy to implement and reduces the amount of data required to be captured for preparing a cost map, as the cost map is predicted based on a set of frames extracted from the video feed. Further, the method avoids the complex data processing that is usually involved in generating a cost map from scanned data and depth information. Such avoidance reduces latency and minimizes the amount of computational power that may be required for generating the cost map.
Referring now to FIG. 1, illustrated are steps of a method 100 for preparing ground truth data for training a neural network to predict a cost map, in accordance with an embodiment of the present disclosure. At step 102, a first video feed of a location is received. At step 104, a first set of frames is extracted from the first video feed, wherein the first set of frames defines an input dataset. At step 106, a cost map is created based on the first set of frames, wherein the created cost map defines the ground truth data. At step 108, the neural network is trained based on an application of the neural network on the input dataset such that the trained neural network generates a prediction corresponding to the ground truth data. At step 110, a second set of frames is extracted from a second video feed of the location. At step 112, a prediction corresponding to a cost map is received based on an application of the neural network on the second set of frames.
The steps 102, 104, 106, 108, 110, and 112 are only illustrative, and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein. It will be appreciated that the method is easy to implement and enables training the neural network to predict the cost map. The trained neural network may be included in a mobile robot or a vehicle.
The mobile robot is an automated machine that may be configured to execute specific tasks (with little or no human intervention) with speed and precision. In particular, the mobile robot may be capable of carrying out a series of actions automatically. The mobile robot comprises a plurality of sensors, effectors, actuators, and control systems. In an example, the mobile robot may include, but is not limited to, robots used in various sectors such as warehouses and distribution centres, medical and healthcare, hospitality, agriculture, and so forth.
The vehicle may be a semi-autonomous or a fully autonomous (i.e., driverless) vehicle that is capable of transporting humans and items from a source location to a destination location. The vehicle may be powered by different types of energy sources such as hydrogen fuel, lithium batteries, electric motors, fossil fuels, and so on. The vehicle may include a plurality of sensors, effectors, actuators, and control systems. In an example, the vehicle may include, but is not limited to, four-wheeler vehicles (such as cars, trucks, buses, and so on) or two-wheeler vehicles (such as motorcycles, scooters, bikes, and so on).
The mobile robot or the vehicle may include cameras or video recording devices such as Digital Single-Lens Reflex (DSLR) cameras, mirrorless cameras, night vision cameras, fish-eye cameras, 360º cameras, and so on. The cameras may include sensors such as charge-coupled device sensors, complementary metal-oxide-semiconductor sensors, charge-injection device sensors, infrared sensors, time-of-flight sensors, hybrid sensors, and so on. The cameras may be mounted on the mobile robot or the vehicle for suitably capturing the first video feed. The sensors may be configured to capture the first video feed of the location. Notably, the captured first video feed may include a plurality of frames. By way of an example, for a camera or a video recording device capturing a video feed at a frame rate of 30 frames per second (fps), 30 distinct frames (images) of the location may be captured in succession within one second. Each region of the location may be depicted in a portion of at least two frames of the plurality of frames. For example, a region that includes a building and a portion of a road may be captured from multiple viewpoints. This may result in the generation of as many frames as the number of viewpoints from which the frames are captured. Thus, the plurality of frames may extensively cover all regions of the location.
At 102, the first video feed, captured by the sensors, may be received by a processor included in the mobile robot or the vehicle. The processor may be a computational element that is operable to process the first video feed to extract the first set of frames from the plurality of frames. The processor may be a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processing circuit. In the present disclosure, the processor may include one or more individual processors, processing devices, and elements arranged in various architectures for responding to and processing the instructions to receive the first video feed, extract the first set of frames from the captured plurality of frames (i.e., the first video feed), create a cost map (i.e., prepare ground truth data), train the neural network based on the created cost map, receive a second video feed, apply the trained neural network on a second set of frames that may be extracted from the second video feed, and receive a predicted cost map as an output of the trained neural network.
As mentioned herein above, the first video feed is associated with the location. The term "location" used herein refers to a scene that is visible or can be captured by the sensors included in the mobile robot or the vehicle. In the present disclosure, the location essentially concerns a traversable area for the mobile robot or the vehicle. For example, the location may typically include a motorable road, a sidewalk, or a route, along with other things (or obstacles) such as moving or fixed objects in the location.
At 104, the processor is configured to extract the first set of frames from the first video feed. According to an embodiment, the processor may employ frame extraction methods to extract the first set of frames from the first video feed. In an example, the methods used for frame extraction may include, but may not be limited to, time-based frame extraction, keyframe extraction, object tracking, frame interpolation, and the like. The first set of frames may define an input dataset, i.e., a collection of frames or images extracted from the first video feed. The frames of the input dataset may be overlapping, i.e., a region (that corresponds to a portion of the location) may be represented by pixels of at least two frames of the first set of frames. The input dataset may be stored for further analysis or processing.
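By way of a non-limiting illustration, a minimal sketch of time-based frame extraction using OpenCV is given below; the video file name, the extraction stride (every Nth frame), and the BGR-to-RGB conversion are illustrative assumptions rather than features of the disclosed method:

```python
import cv2

def extract_frames(video_path, every_nth=30):
    """Return every Nth frame of a video feed as an RGB image (numpy array)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame_bgr = cap.read()
        if not ok:                  # end of the video feed
            break
        if index % every_nth == 0:  # time-based extraction stride
            # OpenCV decodes frames as BGR; convert to the RGB form used herein
            frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        index += 1
    cap.release()
    return frames

# Hypothetical file name; at 30 fps, every_nth=30 keeps roughly one frame per second.
first_set_of_frames = extract_frames("first_video_feed.mp4")
```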
In an embodiment, each frame of the first set of frames of the input dataset may be a Red-Green-Blue (RGB) image of the location. The RGB image may be a digital image that represents the colours and features of a region in the location using a combination of red, green, and blue channels. In other words, each RGB image may represent a datapoint of the input dataset, and accordingly all RGB images (i.e., the first set of frames) may constitute the input dataset. The RGB image combines red, green, and blue colour values in various proportions to obtain any colour in the visible spectrum. From a data processing perspective, the RGB image may be expressed as pixels plotted in a 2D map. For example, each frame in the input dataset may be represented as an M*N*3 array, where M is the number of rows of the matrix, N is the number of columns of the matrix, and 3 is the size of the vector that holds the red, green, and blue intensities of each pixel.
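A small illustration of the M*N*3 representation described above, with illustrative dimensions, is given below; each pixel holds a 3-vector of red, green, and blue intensities:

```python
import numpy as np

M, N = 4, 6                                  # rows and columns (illustrative)
frame = np.zeros((M, N, 3), dtype=np.uint8)  # an all-black M*N*3 RGB image
frame[0, 0] = [255, 0, 0]                    # top-left pixel set to pure red
red, green, blue = frame[0, 0]               # the 3-vector of one pixel
print(frame.shape)                           # (4, 6, 3), i.e., M x N x 3
```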
At 106, the processor is configured to create a cost map based on the first set of frames. The created cost map may define the ground truth data, based on which the neural network may be trained. Typically, a cost map may be a representation of the location or environment that indicates regions (or areas) of the location that may be traversable and areas that may not be traversable. The areas may be represented by grids. The cost map may be used by the mobile robot or the vehicle to plan navigation routes from a source region of the location to a destination region of the location while avoiding obstacles or other potential hazards. Typically, a cost map may indicate a cost (difficulty) of traversing the different areas of the location (represented by grids). The areas of the location (grids) where the ground is flat (without elevations) or smooth may be represented using lower costs, and areas of the location (grids) where the ground is rough or includes elevations may be represented using higher costs. From a data processing aspect, the cost map may be a grid map (a 2D map) where each cell (grid) may be assigned a specific value or cost that indicates whether an area represented by the corresponding cell is traversable. Higher costs indicate a smaller distance between the mobile robot/vehicle and an obstacle. Further, from the cost map, traversable areas of the location may be determined to find a shortest path from a source location to a destination location while avoiding obstacles. In an embodiment, the cost map may be modified based on reception of updated video feeds of the location. The updated video feeds may include frames that may be the same as, similar to, or different from frames of a previous video feed of the location, even if the frames of the updated video feed and the previous video feed represent the same region of the location. The modified cost maps may be created based on the frames of the updated video feed.
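A toy example of such a grid map is sketched below; the grid size and the 0-255 cost scale (with a threshold for "traversable") are illustrative assumptions:

```python
import numpy as np

cost_map = np.zeros((5, 5), dtype=np.uint8)  # 5x5 grid, all cells traversable
cost_map[2, 1:4] = 255                       # a wall-like obstacle: high cost
cost_map[1, 3] = 128                         # rough/elevated ground: mid cost
traversable = cost_map < 50                  # boolean mask of low-cost cells
```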
According to an embodiment, creating the cost map comprises generating a 3D model of the location by applying a photogrammetry technique on the first set of frames (i.e., the RGB images). The application of photogrammetry may result in the generation of a 3D point cloud of the location, i.e., a geometrical representation of the location. From the 3D point cloud, a mesh may be generated. The mesh may be a representation of the surface of the 3D model. Based on the colour and other characteristics obtained from the RGB images (i.e., the input dataset or the first set of frames), texture may be applied on the mesh for the generation of a realistic 3D model. The generation of the 3D model of the location using photogrammetry may involve precise measurements and computations of the size, shape, and position of photographic features (present in the location) and/or obtaining other information such as relative locations (coordinates), areas, volumes, and other properties of such photographic features from the RGB images. The precise measurements and computations may enable recognition and identification of the photographic features from the RGB images.
Thereafter, the cost map may be created from the 3D model. The creation of the cost map may involve voxelization of the 3D model for obtaining a volumetric representation. Thereafter, costs may be assigned to different regions or elements within the 3D model. The cost may be assigned by a cost assignment function. For example, a higher cost may be assigned to regions where obstacles are situated and regions where obstacles are likely to appear in the future. Further, assigned costs may be higher for regions where there is a likelihood of increased energy consumption or delay when the mobile robot or the vehicle traverses through those regions. The likelihood may be determined based on smoothness, roughness, elevations, and other factors that may be associated with the region. On the other hand, a lower cost may be assigned to regions such as open spaces and roads. Thus, the assigned costs enable determination of regions that may be traversable and regions that may not be traversable for the mobile robot and the vehicle.
Thus, the cost assignment function assists in determining a traversable area for a mobile robot or a vehicle in the location. Based on the assigned costs, a route planning algorithm (such as A*/D*) may determine efficient and safe routes for the mobile robot or the vehicle to traverse across the location. Once the cost is assigned, the volumetric representation (i.e., the voxelated 3D model) may be converted into a 2D raster map for the creation of the cost map. The 2D raster map may include grids of pixels. The grids representing lower cost regions may be indicated based on a first colour (for example, white), and grids representing higher cost regions may be indicated based on a second colour (for example, black). The created cost map may define the ground truth data.
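A hedged sketch of one way to collapse a voxelated 3D model into a 2D raster cost map is given below, by projecting occupied voxels onto the ground plane; the grid sizes, the 0/255 cost values, and the any-obstacle-in-the-column rule are illustrative assumptions, not the disclosed cost assignment function:

```python
import numpy as np

def voxels_to_costmap(voxels):
    """Collapse an (X, Y, Z) boolean occupancy grid into an (X, Y) cost map."""
    occupied = voxels.any(axis=2)  # any obstacle voxel above a ground cell
    # 255 = high cost (not traversable), 0 = low cost (traversable)
    return np.where(occupied, 255, 0).astype(np.uint8)

voxels = np.zeros((10, 10, 4), dtype=bool)  # voxelated 3D model (toy size)
voxels[3:5, 3:5, :2] = True                 # a small obstacle in the model
cost_map = voxels_to_costmap(voxels)        # 2D raster map as described above
```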
In some embodiments, the cost map may be created from a High-Definition (HD) map. The HD map may be generated based on an application of photogrammetry on the input dataset (i.e., the first set of frames extracted from the first video feed). Typically, the generation of the HD map may include identification and labelling of specific features, such as roads, intersections, or traffic signals. The generation of the HD map may further include collection of additional data associated with the environment, such as the location of curb heights, lane widths, or other details. Once the relevant information is extracted, the HD map of the location may be generated. The HD map may include details associated with all the photographic features present in the images of the location.
Thereafter, the cost map is created from the HD map. To create the cost map, relevant information associated with each region of the location is determined from the HD map and encoded into a 2D grid map (i.e., the cost map). The relevant information associated with each region may indicate whether the mobile robot or the vehicle may be able to traverse through a corresponding region of the location. The determination and encoding may involve identifying specific features or characteristics of the environment, such as the presence of obstacles or hazards, and assigning a cost or difficulty value to each grid based on the identified features. The created cost map may define the ground truth data.
At 108, the processor may be further configured to train the neural network based on an application of the neural network on the input dataset such that the trained neural network generates a prediction corresponding to the ground truth data (i.e., the created cost map). In an embodiment, the first set of frames and the created cost map may be paired to form the ground truth data. In other words, the ground truth data is formed by pairing or labelling the RGB images associated with the first set of frames (i.e., the input dataset) with the created cost map. After completion of the pairing, the neural network may be applied on the input dataset. Based on the application, the neural network may generate an output. During initial iterations of the training phase, the output may significantly differ from the created cost map (i.e., the ground truth data). Based on the determined difference, weights, biases, and hyperparameters associated with the neural network may be updated such that, at the completion of the training, the trained neural network generates a prediction that corresponds to the ground truth data (i.e., the created cost map).
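A minimal, assumption-laden sketch of this training step is given below: a small convolutional network maps RGB frames to single-channel cost maps, and its weights are updated to reduce the difference from the created cost map. The architecture, the mean-squared-error loss, the optimizer, the image size, and the random stand-in tensors are all illustrative choices, not features specified by the disclosure:

```python
import torch
import torch.nn as nn

model = nn.Sequential(                        # toy frame -> cost-map network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),           # one cost value per pixel
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                        # difference from the ground truth

frames = torch.rand(8, 3, 64, 64)             # stand-in for the input dataset
ground_truth = torch.rand(8, 1, 64, 64)       # stand-in for created cost maps

for epoch in range(10):                       # iterations of the training phase
    prediction = model(frames)                # apply network on input dataset
    loss = loss_fn(prediction, ground_truth)  # output vs. created cost map
    optimizer.zero_grad()
    loss.backward()                           # update weights and biases
    optimizer.step()
```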
In an embodiment, the neural network may be trained based on training data and test data. The size of the training data may be greater compared to that of the test data. For example, a ratio of the training data to the test data may be 3:2, 7:3, 4:1, and so on, based on a specific application or task for which the neural network may be used. It will be evident to a person skilled in the art that the training data and the test data would be used to fit the neural network and to evaluate the performance and accuracy of the neural network, respectively. Thereafter, at the inference stage, the trained neural network may be employed on a mobile robot or a vehicle. A set of RGB images of a location may be provided as an input to the trained neural network such that the trained neural network predicts a cost map as output. Based on the predicted cost map, the mobile robot or the vehicle may determine traversable areas in the location.
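A brief sketch of one such split, at the 4:1 ratio mentioned above, follows; scikit-learn and the integer stand-ins for (frame, cost map) pairs are illustrative assumptions:

```python
from sklearn.model_selection import train_test_split

samples = list(range(100))  # stand-ins for (RGB frame, cost map) pairs
train_set, test_set = train_test_split(samples, test_size=0.2, random_state=0)
print(len(train_set), len(test_set))  # 80 20, i.e., a 4:1 ratio
```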
It may be evident to a person skilled in the art that the neural network is a network of artificial neurons programmed in software that simulates a human brain in terms of performing tasks such as processing images, videos, audio, texts, and so forth, and determining meanings from the images, videos, audio, and text. Typically, a neural network comprises a plurality of layers of nodes, viz., an input layer, one or more intermediate hidden layers, and an output layer, interconnected, for example in a feed-forward manner (i.e., flow in one direction only, from input to output). In the present disclosure, the neural network is trained to predict a cost map.
According to an embodiment, training the neural network comprises identifying a cost function based on the created cost map. The cost function may be determined further based on the output of the neural network, generated at each iteration of application of the neural network on the input dataset. The cost function may be indicative of a difference between the created cost map (ground truth) and the output at each iteration. The difference may be required to be minimized for training the neural network such that the neural network generates an output (prediction) that matches the ground truth (the created cost map).
In an embodiment, the cost function may be minimized based on determination of a set of variables. The variables may be associated with each region of the location represented by each frame of the first set of frames (i.e., the input dataset). Based on the variables, the cost assignment function may assign costs to the regions for the creation of the cost map (i.e., the ground truth). The variables may include coordinates of the regions, presence of obstacles in the regions, degree of elevation in the regions, degree of smoothness or roughness in the regions, and so on. Once the variables are determined, a relationship between the cost assigned to each region and each variable of the set of variables may be determined. Based on the variables and the relationship between the assigned cost and the set of variables, the cost function may be identified, and the weights and the biases of the nodes of the neural network may be adjusted such that the neural network generates output (prediction) that matches the ground truth (the created cost map).
In an embodiment, the relationship between the assigned cost and the set of variables may be determined based on a regression analysis (if the relationship is linear) or non-linear function modelling (if the relationship is non-linear). The cost function may then be identified based on the determined relationship.
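A hedged sketch of the linear case follows: fitting the relationship between a region's assigned cost and its variables with a regression analysis. The choice of variables (elevation and roughness), the data values, and the use of scikit-learn are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative per-region variables and the costs assigned to those regions.
elevation = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
roughness = np.array([0.0, 0.2, 0.4, 0.8, 1.0])
assigned_cost = np.array([0.0, 15.0, 45.0, 85.0, 100.0])

X = np.column_stack([elevation, roughness])   # the set of variables
reg = LinearRegression().fit(X, assigned_cost)
print(reg.coef_, reg.intercept_)              # the determined relationship
```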
At 110, the processor may be further configured to extract a second set of frames from a second video feed of the location. The second video feed of the location may be captured by the sensors of the mobile robot or the vehicle. The processor may employ frame extraction methods to extract the second set of frames from the second video feed. The frames of the second set of frames may be overlapping. Each region of the location may be represented by pixels of at least two frames of the second set of frames. Each frame of the second set of frames may be an RGB image of the location.
At 112, the processor may be further configured to receive a prediction that may correspond to a cost map. The prediction may be received based on an application of the trained neural network on the second set of frames. The prediction may be received as an output of the trained neural network. Thus, during the inference stage, a cost map may be obtained based on an input set of frames of the location, and the computational cost and latency involved in generating a cost map may be significantly reduced or minimized by employing the trained neural network in the mobile robot or the vehicle.
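A minimal inference-stage sketch, reusing the toy network from the training sketch above, is given below; the frame tensor is an illustrative stand-in for the second set of frames:

```python
import torch

# 'model' refers to the toy network defined in the training sketch above.
second_set_of_frames = torch.rand(4, 3, 64, 64)  # stand-in for second video feed
with torch.no_grad():                            # inference only, no training
    predicted_cost_map = model(second_set_of_frames)
print(predicted_cost_map.shape)                  # torch.Size([4, 1, 64, 64])
```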
Referring now to FIGs. 2A and 2B, illustrated are schematic illustrations of input and output data of the neural network, in accordance with an embodiment of the present disclosure. FIG. 2A depicts an exemplary RGB image 202 of a location. The neural network may be applied on the input data (i.e., the RGB image 202). The RGB image 202 may be extracted from a video feed of the location. Further, FIG. 2B depicts a cost map 204. The cost map 204 may be predicted as an output by the neural network. It would be evident to a person skilled in the art that once the neural network is trained using the ground truth data (i.e., the RGB images of the input dataset and the corresponding created cost maps) prepared via the method 100 of FIG. 1, the RGB image 202 of FIG. 2A can be provided as the input to the trained neural network to predict the cost map 204 of FIG. 2B. Further, based on the cost map of FIG. 2B, a cost assignment function can be determined, which can assist in determining a traversable area for a mobile robot or a vehicle in the location.
In an embodiment, the present disclosure relates to a computing device having the processor, wherein the processor is configured to perform the steps of the method explained in conjunction with FIG. 1.
Additionally, the present disclosure relates to a mobile robot or a vehicle having the aforementioned computing device.
Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". It is appreciated that certain features of the present disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable combination or as suitable in any other described embodiment of the disclosure.
CLAIMS
I/We claim:
1. A method (100) for preparing ground truth data for training a neural network to predict a cost map (204), the method comprising:
- receiving (102) a first video feed of a location;
- extracting (104) a first set of frames from the first video feed, wherein the first set of frames defines an input dataset;
- creating (106) a cost map based on the first set of frames, wherein the created cost map defines the ground truth data;
- training (108) the neural network based on an application of the neural network on the input dataset such that the trained neural network generates a prediction corresponding to the ground truth data;
- extracting (110) a second set of frames from a second video feed of the location; and
- receiving (112) a prediction corresponding to the cost map (204) based on an application of the neural network on the second set of frames.
2. The method (100) according to claim 1, wherein creating the cost map comprises
- generating a 3-Dimensional (3D) model of the location by applying a photogrammetry technique on the first set of frames, and
- creating the cost map from the 3D model.
3. The method (100) according to claim 1, wherein each frame of the first set of frames is a Red-Green-Blue (RGB) image.
4. The method (100) according to claim 1, wherein training the neural network comprises identifying a cost function based on the created cost map.
5. The method (100) according to claim 4, wherein the identification of the cost function comprises:
- determining a set of variables based on which a cost is assigned to each region of the location for creation of the cost map, wherein each region corresponds to at least one pixel of at least one frame of the first set of frames; and
- determining a relationship between the cost and each variable of the set of variables, wherein the cost function is identified based on the determined relationship.
6. The method (100) according to claim 5, wherein the relationship is determined based on one of a regression analysis or non-linear function modelling.