Abstract: An interactive three dimensional (3D) virtual object visualization system comprising: a model processing module configured to import and validate at least one 3D model from a model source, wherein the model processing module further comprises a mesh processing module configured to divide the at least one 3D model into a sequence of meshes and a texture processing module configured to divide the at least one 3D model into a sequence of textures; a processor to request corresponding mesh and texture sequences and to maintain synchronization between said mesh and texture sequences; a rendering engine that is configured to render the decoded mesh on a display device; and a controller that is configured to detect one or more user actions and control the movement of a plurality of 3D virtual objects in accordance with the one or more user actions. Ref.: Fig. 1
Field of the Invention
The present invention relates to a system and method for processing and visualization of Three Dimensional (3D) digital content for enhancing user experience in a holographic, Augmented Reality, or Virtual Reality setting.
Background of the Invention
A dynamic 3D object is defined as a three-dimensional virtual object that dynamically changes its shape through deformations or relative motions of its parts. The definition of dynamic can be extended to include not only geometrical changes but also changes in appearance, color, texture, and other attributes of the object. Examples of a dynamic 3D object are a virtual person walking or a virtual heart beating. A static 3D key-model is herein defined as a model of a dynamic 3D object in a fixed pose with a particular set of attributes.
The processing and presentation of high-quality Augmented Reality (hereinafter ‘AR’), Virtual Reality (hereinafter ‘VR’), and mixed reality content is expensive and fraught with challenges. Automated platforms known in the state of the art are unable to provide a quality immersive experience within the same platform. Platforms known in the state of the art take a 3D model which consists of a sequence of encoded meshes and textures. Compressing, decompressing, transmitting and rendering this model interactively within a digital environment fails to deliver the expected user experience, and generally the user has to separately rely on expensive AR and VR headsets or holograms for a truly immersive experience. There is a need for one single platform which is cost effective and is able to integrate all of these experiences.
Three Dimensional (3D) modeling sources create 3D models having meshes (or curves) and textures. Sources creating 3D models may be manual (a human) or automated (such as an automated 3D scanning machine), and such models are processed in a machine-readable format. Generally, 3D models to be imported into a platform are of different filetypes and file extensions and are therefore required to be processed, scaled to size and standardized, based on the type of rendering, prior to rendering in order to provide a uniform user experience. Furthermore, meshes and textures within a 3D model are required to be analysed and aligned for the model to be properly aligned, in the absence of which the model will be rendered incorrectly.
Another problem faced by automated platforms (as well as displays enabled by AR and VR headsets) during rendering and visualization of dynamic 3D objects is one of scale. For example, for a current 3D model being rendered, a certain level of magnification is applied using the scroll functionality. User actions on the rendered model, such as scrolling, are limited to the dimensions of the screen of the display device. When the user changes the 3D model, he will expect the scroll functionality to have the same range of magnification available as for the previous 3D model. However, there is a limitation in how scroll works: the scroll from the controller uses the dimensions of the screen as the limit. By this analogy, if the scroll position was taken up to 70% of the screen width for the current 3D model and the 3D model was then changed, the next 3D model would have 100% − 70% = 30% of scroll space left for magnification. This would drastically limit the controller range of motion available for magnifying the next 3D model.
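By way of a non-limiting illustrative sketch, the limitation may be expressed as follows; the names and numbers below are assumptions for illustration and not part of any particular platform:

```python
# Illustrative sketch of the screen-bounded scroll limitation described above.
# All names and values are hypothetical.

SCREEN_WIDTH = 1080  # pixels; the controller treats this as the hard scroll limit

def remaining_scroll_range(scroll_position_px: float) -> float:
    """Fraction of the screen width still available for magnification."""
    return 1.0 - (scroll_position_px / SCREEN_WIDTH)

# User magnifies the current model, consuming 70% of the screen width.
scroll_position = 0.70 * SCREEN_WIDTH

# The model is now swapped, but the absolute scroll position is retained,
# so only 30% of the range is left to magnify the next model.
print(f"remaining range: {remaining_scroll_range(scroll_position):.0%}")  # -> 30%
```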
Accordingly, there is a need for an interactive and integrated 3D virtual object visualization system which can provide an enhanced user experience on any display device.
Summary of the Invention
To overcome the problems associated with the present state of the art, the present disclosure describes systems and methods for visualization of 3D content.
It is an object of the present invention to provide an interactive 3D virtual object visualization system comprising: a model processing module configured to identify and download 3D models from a model source into a local storage buffer, the model processing module comprising a mesh processing module and a texture processing module. The mesh processing module divides the 3D model into a sequence of meshes and processes the meshes for color and position, and further includes a mesh decoder configured to decode compressed mesh sequences into frames of meshes in real-time, and a mesh decoder manager configured to request compressed mesh sequences from the mesh storage and to request the mesh decoder to decode the compressed mesh sequences. The system further comprises a texture processing module configured to decode texture, to request the corresponding decoded mesh from the mesh decoder manager, and to maintain synchronization between requesting the corresponding decoded mesh and decoding the texture; a rendering engine that is configured to render the decoded mesh on a display device; and a controller that is configured to detect one or more user actions and control the movement of the rendered 3D model in accordance with the one or more user actions.
It is another object of this invention to provide for a method for visualization of a plurality of interactive three dimensional (3D) virtual objects, comprising the steps of: using a model source to export a plurality of static 3D models in model files describing at least one dynamic 3D object; using a model processing module to import said model files and extract the plurality of static 3D models; decoding, using a mesh decoder, one or more compressed mesh sequences from the said static 3D models, said mesh sequences having vertex indices; decoding texture sequences from the said static 3D models using a texture processing module and generating at least a texture sequence, said texture sequence including color and other attributes; synchronizing the decoded mesh and texture sequences; rendering the decoded mesh and texture sequences inside a digital environment on a display device; and controlling movement of the said rendered 3D model using a controller in accordance with user actions.
It is a further object of this invention to provide for the model processing module in the interactive 3D virtual object visualization system to be configured to validate the filetype and file extension of the imported 3D model.
It is a further object of this invention for the mesh decoder in the interactive 3D virtual object visualization system to be configured to decode the color and position of each mesh.
It is a further object of this invention to provide for the controller in the interactive 3D virtual object visualization system to be configured to store scaled values of the rendered model, determined by way of a scrolling function.
It is another object of this invention for the controller in the interactive 3D virtual object visualization system to be communicably coupled to a hardware camera to capture the one or more user actions and to provide the one or more user actions as feedback to the renderer.
Brief Description of the Figures
Figure 1 is a block diagram illustrating an example 3D virtual object visualization system in accordance with some implementations of the present invention.
Figure 2 is a block diagram illustrating the 3D Model processing module in accordance with some implementations of the present invention.
Figure 3 is a block diagram illustrating the 3D model rendering engine in accordance with some implementations of the present invention.
Figure 4 is a block diagram illustrating the Display App Rendering Module in accordance with some implementations of the present invention.
Figure 5 is a block diagram illustrating a model source in accordance with some implementations of the present invention.
Figure 6 is a flow chart illustrating a method for interactive 3D virtual object rendering and visualization in a digital environment in accordance with some implementations.
Detailed Description
Holographic, VR, AR or mixed reality technologies may be applied on portable devices 102 (see Figure 1). For example, a user can activate a smart phone's camera, and if the camera captures a table, the user can graphically place a character as a composite on the table. The composite image appears as if the character exists in the real world the same way the table does, such that it is possible to generate AR content having dynamic content within the composite.
In the present disclosure, provided are technologies that capture and render a 3D model to enable the creation of 3D dynamic mixed reality content as well as technologies for displaying the mixed reality content and for enabling a user to interact with the displayed content.
Figure 1 is a block diagram illustrating an example 3D virtual object visualization system 100 in accordance with some embodiments. In an exemplary embodiment of the present invention, the system 100 includes one or more user devices 102 (e.g., devices 102A, 102B, 102C etc.) and a model source 103, communicably coupled over a communication network 104. The communication network 104 interconnects the one or more devices 102 with each other, and optionally includes the Internet, one or more local area networks (LANs), one or more wide area networks (WANs), other types of networks, or a combination of such networks.
In an exemplary embodiment of the present invention, a user device 102 renders 3D models received from the model source 103. A user device 102 may be a mobile device, e.g., a smart phone, a laptop computer, a tablet computer, or an audio/video player. The user device comprises a 3D Model Processing Module (200). The model processing module is coupled to a 3D Model Rendering Engine (300) that is connected to a Display Module (400). The display module is connected to an Experience Delivery Device (500), set to enhance the user experience, which is optionally coupled to a Control Module (600) for performing user actions on it.
As known to a person skilled in the art, tools exist for the creation of 3D models. Such tools can be automated tools, which include software applications that help create objects with complex attributes, and/or manual tools, which include 3D scanners and other devices. Static 3D models are created using such tools and kept in the model source 103 as model files. In an embodiment of the present invention, the 3D Model processing module (200) helps import the model files into the system and extract the static model files.
In a preferred embodiment of the present invention, the 3D Model Processing Module (200) is configured to verify the 3D model file for filetype and file extension before further processing. The model processing module is responsible for processing the meshes and textures, scaling the 3D model and standardizing it before it is sent to the 3D rendering module.
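As a non-limiting illustrative sketch of such validation (the accepted extensions and helper names below are assumptions for illustration; the disclosure does not enumerate specific formats):

```python
import os

# Hypothetical whitelist; the disclosure does not enumerate specific formats.
SUPPORTED_EXTENSIONS = {".obj", ".fbx", ".gltf", ".glb"}

def validate_model_file(path: str) -> bool:
    """Check file extension and basic readability before import."""
    _, ext = os.path.splitext(path)
    if ext.lower() not in SUPPORTED_EXTENSIONS:
        return False
    return os.path.isfile(path) and os.access(path, os.R_OK)
```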
In another embodiment of the present invention, the model source system 103 may process the 3D model by compressing and decompressing the data, as well as analyzing and modifying the data for reconstruction at the remote end.

Figure 2 is a block diagram illustrating the 3D processing module 200 within an interactive 3D virtual object visualization system in accordance with an exemplary embodiment of the present invention. The interactive 3D virtual object visualization system can effectively render 3D textured mesh objects in a digital environment in accordance with user interactions. In a preferred embodiment, the interactive 3D virtual object visualization system is configured to load 3D models having encoded meshes and corresponding texture content asynchronously, decode meshes frame by frame into coordinates such as vertices, triangle indices and texture coordinates, decode texture sequences using hardware decoders known in the art, and render the decoded mesh and texture while maintaining synchronization. In another embodiment of the present invention, the interactive 3D virtual object visualization system is able to respond to user actions such as scrolling, mouse dragging, finger pinching etc., to move, zoom or rotate the 3D virtual object.
In another exemplary embodiment of the present invention, the 3D processing module 200 includes a mesh storage module 204 and a mesh processing module 205. The mesh processing module is configured to divide the 3D model into a sequence of meshes and internally comprises a mesh decoder 210 necessary to decode compressed mesh sequences into frames of meshes by, for instance, performing a series of computations. In other embodiments of the present invention, the 3D mesh processing module 205 implements a mesh downloading process that manages downloads of compressed mesh sequences from the model source 103 and stores the downloaded mesh sequences in the mesh storage module 204, where the mesh sequences await operations such as decompression and rendering. The mesh loading process may load multiple mesh sequences concurrently, asynchronously, and with or without different priorities. In another embodiment of the present invention, the mesh processing module 205 comprises a mesh decoder manager 211, configured to retrieve decoded mesh data from a mesh storage module/buffer 204 and provide the decoded mesh data to the mesh-texture processor 203.
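By way of a non-limiting illustrative sketch, a buffered decode pipeline of this kind may be arranged as follows; all class and method names are assumptions for illustration, and the actual decoding computation is abstracted behind a caller-supplied function:

```python
import queue
import threading

class MeshDecoderManager:
    """Sketch of a manager that buffers compressed mesh sequences and
    decodes them frame by frame on a worker thread."""

    def __init__(self, decode_fn, max_buffered_frames: int = 30):
        self._decode_fn = decode_fn                    # hypothetical frame decoder
        self._compressed = queue.Queue()               # mesh storage buffer (cf. 204)
        self._decoded = queue.Queue(max_buffered_frames)
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, compressed_frame: bytes) -> None:
        """Store a downloaded compressed mesh frame for later decoding."""
        self._compressed.put(compressed_frame)

    def next_decoded_frame(self):
        """Block until the next decoded mesh frame is available."""
        return self._decoded.get()

    def _run(self) -> None:
        while True:
            frame = self._compressed.get()
            self._decoded.put(self._decode_fn(frame))  # vertices, indices, UVs
```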
In another embodiment of the present invention, the 3D processing module comprises a texture processing unit comprising a texture processing module and a texture storage module/buffer. Similar to the mesh processing module, the texture processing module processes the texture for colors (or any other texture attributes) and enables texture reconstruction prior to sending it to the processor 203.
In another exemplary embodiment of the present invention, the mesh-texture processor 203 is configured to maintain synchronization between the mesh and texture, and requests decoded mesh and texture data from the mesh and texture units respectively to be sent to the rendering engine 208.
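As a minimal illustrative sketch of this synchronization step (pairing by frame index is an assumption; the disclosure only states that synchronization is maintained):

```python
def synchronized_frames(mesh_frames, texture_frames):
    """Sketch of the mesh-texture processor's synchronization step:
    pair mesh frame i with texture frame i and yield them together, so the
    renderer never receives a mesh without its matching texture."""
    for mesh, texture in zip(mesh_frames, texture_frames):
        yield mesh, texture  # handed to the rendering engine as one unit
```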
In an exemplary embodiment of the present invention, the rendering engine communication module 208 receives the decoded mesh and texture coordinates, to render the decoded mesh on a platform which can include a digital environment using a rendering engine 300, as will be described below.
In an exemplary embodiment of the present invention, a controller 600 may also be used to control the movement of the 3D virtual object, to detect user actions, e.g., mouse dragging, mouse scrolling, finger movement, or finger pinching (on a desktop computer or a mobile device), and to set parameters of a communicably coupled hardware camera from the renderer, enabling user interaction with a rendered 3D virtual object.
In a preferred exemplary embodiment of the present invention, a scroll functionality may be embodied within the controller 600 in the interactive textured mesh 3D virtual object rendering device. The controller may be communicably coupled with a physical scroll button, roller or any equivalent hardware thereof, and is capable of translating the scroll movement, in the manner of a scaling function commonly known in the art, for transmission to a display device. The display device sends the magnification value to the controller, which stores it. When the next 3D model is loaded, the controller sends the stored magnification value to the display device, which then magnifies the 3D model to the same value as the previous model before scaling it further. Because the controller 600 stores the magnification value, which is accessible as and when the model changes, the user has the same range of motion for the current 3D model as for the previous 3D model. This gives the user consistency in scroll and magnification and ensures a seamless content delivery experience.
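A non-limiting illustrative sketch of this controller-side magnification persistence follows; the names are assumptions, and `display` stands for any object exposing a hypothetical set_magnification(value) method:

```python
class ScrollController:
    """Sketch of a controller that stores the magnification value so it
    survives model changes (cf. controller 600)."""

    def __init__(self, display):
        self._display = display
        self._magnification = 1.0  # stored value, survives model changes

    def on_scroll(self, delta: float) -> None:
        # Translate scroll movement into a scaling function and remember it.
        self._magnification = max(0.1, self._magnification * (1.0 + delta))
        self._display.set_magnification(self._magnification)

    def on_model_changed(self) -> None:
        # Restore the stored value so the new model starts at the same
        # magnification, preserving the full range of motion.
        self._display.set_magnification(self._magnification)
```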
In a preferred embodiment of the present invention, the display device can be an AR or VR headset or a hologram.
In a preferred embodiment of the present invention, the display device is also configured to store the magnification value of the rendered object.
Figure 3 illustrates a block diagram of a 3D Model Rendering Engine 300 within an interactive 3D virtual object visualization system in accordance with an exemplary embodiment of the present invention. The 3D Model rendering engine 300 comprises a 3D Model Storage module 310, a Material module 320, and a Shader module 330 communicably coupled to a Processor module 340 and an App Rendering Communication Module 350.
In an embodiment of the present invention, attribute-generating modules are used for reconstruction and generation of the 3D Model, said modules including, but not limited to, a Vertex Positions module 311 which describes the positions of indices of mesh vertices in 3D space, a Vertex Colours module 312 which describes the visible colour of vertices, a Vertex Direction module 313 which describes the direction in which the mesh vertices point, and a Texture Mapping module 314 which describes the nature of the surface texture to be mapped onto the skin of the model. The attributes necessary for rendition of 3D content, and the techniques used for their extraction, are well known to a person skilled in the art.
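As a non-limiting illustrative sketch, the per-frame attributes produced by modules 311 to 314 may be gathered into a structure such as the following; the field names and types are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MeshFrame:
    """Sketch of per-frame mesh attributes (cf. modules 311-314)."""
    vertex_positions: List[Vec3] = field(default_factory=list)   # module 311
    vertex_colours:   List[Vec3] = field(default_factory=list)   # module 312 (RGB)
    vertex_normals:   List[Vec3] = field(default_factory=list)   # module 313
    uv_coordinates:   List[Tuple[float, float]] = field(default_factory=list)  # module 314
    triangle_indices: List[Tuple[int, int, int]] = field(default_factory=list)
```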
The Material module 320 includes a Textures module 321 which stores textures (surface skins) to be used for generating the 3D Model and a Shader Property Values module 322 for controlling the shading or intensity of the texture. The Shader 330 describes the type of shading to be used on the 3D Model. This could be surface shading, light source based shading, image based shading or computed shading.
The mesh and texture data is transmitted to an App Rendering Module via the App Rendering Communication Module 350, to render the decoded mesh on a platform which can include a digital environment. In a preferred embodiment, the processed 3D model data is translated into a machine-readable file with the instructions for generating a 3D Model.
Figure 4 illustrates a block diagram for the Display App Rendering Module 400 within an interactive 3D virtual object visualization system in accordance with an exemplary embodiment of the present invention. This module comprises a Static Components Storage Module 401, a Byte Code Compilation Module 402, and a Display App Packaging Module 403 coupled to a Display App Module 404. The Display App Module in turn comprises an Activity Module 405, further comprising a Motion Mapping Module 451 and a Display Interface Mapping Module 452, and a Services Module 406 comprising a Rendering Module 461 and a Model Storage Module 462.
The Display App Packaging Module 403 takes in the 3D model from the 3D Model Rendering Module 300, along with static components needed for runtime execution from the Static Components Storage Module 401 and the machine-readable compiled form of the instructions from the Byte Code Compilation Module 402. Within the Display App Module, the Activity Module 405 controls the user interfacing actions and experience, and the Services Module 406 manages all long-running services needed by the Activity Module 405.
The Motion Mapping Module 451 converts motion of the experience delivery device into motion around the 3D model. The Display Interface Mapping Module 452 maps the rendering of the model and its motion onto the display interface of the experience delivery device. The Rendering Module 461 provides rendering signals to the Display Interface Mapping Module 452 based on the rendering instructions of the 3D model. The Model Storage Module 462 holds the pre-processed 3D model descriptor file, which is furthered to the Experience Delivery Service Module 500.
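By way of a non-limiting illustrative sketch, such motion mapping may convert device orientation into a camera orbit around the model; the spherical-orbit mapping and parameter names below are assumptions for illustration:

```python
import math

def orbit_camera(yaw_deg: float, pitch_deg: float, radius: float = 3.0):
    """Sketch of a motion-mapping step: convert device yaw/pitch (e.g. from
    an inertial sensor) into a camera position orbiting a 3D model placed
    at the origin."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)  # camera looks toward (0, 0, 0)
```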
Figure 5 is a block diagram illustrating a model source 103 for creating 3D objects in accordance with an exemplary embodiment of the present invention. Such sources can be automated tools, which include software applications that help create objects with complex attributes, and/or manual tools, which include 3D scanners and other devices.
In a preferred embodiment of the present invention, the model source includes one or more processing units CPU(s) 502, one or more network interfaces 504, a memory 506, and one or more communication buses 508 for interconnecting these components. The communication buses 508 optionally include chipsets that interconnect and control communications between system components. The memory 506 typically includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 506 optionally includes one or more storage devices remotely located from the CPU(s) 502. The memory 506, or alternatively the non-volatile memory device(s) within the memory 506, comprises a non-transitory computer readable storage medium. In some implementations, the memory 506 or alternatively the non-transitory computer readable storage medium stores the following programs, modules and data structures, or a subset thereof:
* an operating system 510, which includes procedures for handling various basic system services and for performing hardware dependent tasks;
* a network communication module (or instructions) 512 for connecting the model source 103 with other devices (e.g., any of the user devices 102A ... 102D) via one or more network interfaces 504 (wired or wireless) or the communication network 104 (Figure 1); and
* data 514 which may include one or more 3D models or portions thereof.
In other embodiments of the present invention, one or more of the above identified elements are stored in one or more of the previously mentioned memory devices, and correspond to a set of instructions for performing a function described above. The above identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, the memory 506 optionally stores a subset of the modules and data structures identified above. Furthermore, the memory 506 may store additional modules and data structures not described above.
Figure 6 is a flow chart illustrating a method 601 for visualization of a plurality of interactive three dimensional (3D) virtual objects in accordance with some implementations. As noted above, the method 601 may be implemented at a user device, such as a mobile device or a desktop computer.
In an exemplary embodiment of the present invention, the method 601 includes exporting 3D models from a model source at 602; importing said model files and extracting the plurality of static 3D models using a model processing module at 604; decoding, using a mesh processing module having a mesh decoder, one or more compressed mesh sequences from the said static 3D models, said mesh sequences having vertex indices; decoding texture sequences from the said static 3D models using a texture processing module, said texture sequences including color and other attributes, at 608; and generating at least a texture map file to create the texture sequence at 610.
The method further includes synchronizing 612 the mesh and texture sequences for transmission to the rendering engine; rendering 614, using a rendering engine, in a digital environment; and detecting 616, using a controller, one or more user actions and reacting accordingly.
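A non-limiting illustrative sketch of method 601 end to end follows; every callable and the controller method are hypothetical stand-ins for the corresponding modules of Figures 1-4:

```python
def visualize(model_files, import_fn, decode_mesh_fn, decode_texture_fn,
              render_fn, controller):
    """Sketch of method 601: import, decode, synchronize, render, interact."""
    for model_file in model_files:                   # 602/604: export and import
        static_model = import_fn(model_file)
        meshes = decode_mesh_fn(static_model)        # compressed mesh sequences
        textures = decode_texture_fn(static_model)   # 608/610: texture sequences
        for mesh, texture in zip(meshes, textures):  # 612: synchronize
            render_fn(mesh, texture)                 # 614: render in environment
            controller.poll_user_actions()           # 616: detect user actions
```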
The foregoing description included example systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative implementations. For purposes of explanation, numerous specific details were set forth in order to provide an understanding of various implementations of the inventive subject matter. It will be evident, however, to those skilled in the art that implementations of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures and techniques have not been shown in detail.
The foregoing description, for purposes of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles and their practical applications, to thereby enable others skilled in the art to best utilize the implementations, with various modifications as are suited to the particular use contemplated.
Claims
We claim:
1. An interactive three dimensional (3D) virtual object visualization system comprising:
a model processing module configured to import and validate at least one 3D model from a model source, wherein the model processing module further comprises,
a mesh processing module configured to divide the at least one 3D model into a sequence of meshes;
a texture processing module configured to divide the at least one 3D model into a sequence of textures;
a processor to request corresponding mesh and texture sequences and to maintain synchronization between the said mesh and texture sequences;
a rendering engine that is configured to render the decoded mesh on a display device; and
a controller that is configured to detect one or more user actions and control the movement of a plurality of 3D virtual objects in accordance with the one or more user actions.
2. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the mesh processing module comprises a mesh decoder configured to decode compressed mesh sequences into frames of meshes in real-time.
3. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the mesh processing module comprises a mesh decoder manager configured to request compressed mesh sequences from a mesh storage module and to request the mesh decoder to decode the compressed mesh sequences.
4. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the model processing module is configured to validate filetype and file extension of the imported 3D model.
5. The interactive 3D virtual object visualization system as claimed in claim 2, wherein the mesh decoder is configured to decode the color and position of each mesh.
6. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the controller is configured to store magnification values of the rendered model.
7. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the display device is configured to store magnification values of the rendered model.
8. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the controller uses a hardware camera to capture the one or more user actions and provides the one or more user actions as feedback to the renderer.
9. The interactive 3D virtual object visualization system as claimed in claim 1, wherein the display device is any of a hologram, an Augmented Reality (AR) headset or a Virtual Reality (VR) headset.
10. A method for visualization of a plurality of interactive three dimensional (3D) virtual objects, comprising the steps of:
using a model source to export at least one 3D model in model files describing at least one dynamic 3D object;
importing said model files and extracting the at least one 3D model using a model processing module;
dividing the 3D models into a sequence of meshes using a mesh processing module;
decoding texture sequences from the at least one 3D model and generating at least a texture map file, said texture sequence including color and other attributes, using a texture processing module;
synchronizing the decoded mesh and texture sequences using a processor;
rendering the decoded mesh and texture sequences inside a digital environment on a display device; and
controlling movement of the said rendered 3D virtual object using a controller in accordance with user actions.
11. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 10, wherein the step of downloading 3D models from a model source includes the step of validating the filetype and file extension of the imported 3D model.
12. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 10, wherein the step of dividing the 3D models into a sequence of meshes includes the step of decoding one or more compressed mesh sequences.
13. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 11, wherein the step of decoding one or more compressed mesh sequences from the said static 3D models is performed using a mesh decoder, said mesh sequences having vertex indices.
14. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 12, wherein the step of decoding one or more compressed mesh sequences includes the step of decoding the color and position of each mesh.
15. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 10, wherein the step of detecting user movements includes detecting user scrolling and magnifying the rendered 3D virtual object.
16. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 10, wherein the step of detecting user movements includes capturing the one or more user actions using a hardware camera and providing the one or more user actions as feedback to the renderer.
17. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 15, wherein the step of magnifying the rendered 3D virtual object includes the step of storing the magnification values in the controller.
18. The method for visualization of a plurality of interactive 3D virtual objects as claimed in claim 15, wherein the step of magnifying the rendered 3D virtual object includes the step of storing the magnification values in the display device.