
Video Capturing Assistive Device

Abstract: A video capturing assistive device, comprising a cuboidal body 101 positioned over a ground surface of an enclosure by means of a plurality of motorized wheels 102, a platform 103 configured over the body to accommodate a camera, an artificial intelligence based imaging unit 104 that determines the presence of people within the enclosure, a database pre-saved with a mapping of the enclosure along with a schedule regarding the times at which pictures and videos are to be captured at various portions of the enclosure, extendable sliding units 105 that provide translation to ropes 106 configured between the sliding units 105 and the platform 103 to adjust the height of the platform 103, motorized rollers 107 that regulate the angle of the platform 103, and a holographic projection unit 108 that guides people to arrange themselves over the surface in accordance with the projections.


Patent Information

Application #:
Filing Date: 02 December 2024
Publication Number: 1/2025
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:

Applicants

Marwadi University
Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.

Inventors

1. Dr. Nikhilkumar Jagjivan Chotai
Department of Mechanical Engineering, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.
2. Dr. Nikunj Rameshkumar Mehta
Department of Mechanical Engineering, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.
3. Jatin Dineshbhai Rayani
Department of Mechanical Engineering, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.

Specification

Description: FIELD OF THE INVENTION

[0001] The present invention relates to a video capturing assistive device designed to help a user capture photos and videos at events by automatically detecting the presence of people within the event's enclosure, enabling efficient, dynamic recording without manual intervention and ensuring key moments are captured with minimal effort from the user.

BACKGROUND OF THE INVENTION

[0002] Video capturing during an event is essential for documenting key moments, preserving memories, and providing an accurate record of proceedings. It allows organizers, attendees, and even those who couldn't attend to experience the event visually, offering a richer perspective compared to written summaries or photographs. Videos capture not just the visuals but also the sounds, emotions, and atmosphere, providing a more immersive experience. Additionally, video content can be used for post-event analysis, marketing, and promotional purposes. For instance, clips or highlights from events can be shared on social media to engage a wider audience. In professional settings, video recording is crucial for training, compliance, or legal purposes, offering verifiable evidence of discussions and activities. The need for video in events has grown as it enhances communication, provides long-term value, and serves as a powerful tool for reflection, education, and outreach. Thus, video capturing is an indispensable part of modern event management and media production.

[0003] Traditional methods of capturing photos and videos at events typically involve using physical cameras and camcorders operated by photographers and videographers. These professionals manually compose shots, adjust settings, and often rely on staged moments to ensure quality footage. While this method provides control over the visuals, it can be limiting in several ways. One drawback is the reliance on equipment like film cameras or early digital systems, which can be bulky, time-consuming to set up, and prone to technical failures. Moreover, these setups often lack real-time flexibility, meaning key moments may be missed if the photographer is focused on specific shots or angles. Additionally, traditional methods require significant human labor for both capturing and editing, leading to longer production times. Another challenge is limited coverage, as cameras can only capture specific angles or moments, while attendees might experience the event differently than what's documented. This restricts the variety of perspectives and spontaneity.

[0004] EP4443166A1 provides a camera holding device for an automatic analysis apparatus, capable of easily adjusting the tip positions of pipetting nozzles having different lengths, even with the use of a single single-focus camera. The disclosed camera holding device holds a camera to capture an image of the tip of a pipetting nozzle of the automatic analysis apparatus on a pipetting arm that moves the pipetting nozzle through rotational movement, the camera holding device including: an expansion/contraction part that expands and contracts in a vertical direction with respect to the pipetting arm; an arm-side fixing part that fixes one end of the expansion/contraction part to the pipetting arm; and a camera-side fixing part that fixes the camera to the other end of the expansion/contraction part.

[0005] US2015264228A1 relates to a case for surveillance video cameras comprising a first and a second half-shell connectable to each other to define a housing volume for at least one video camera. At least one of the first and the second half-shell comprises a transparent panel intended to be placed in front of a lens of the video camera. The case further comprises a holding device for the video camera which defines a duct conveying an air flow generated by air flow generating means towards the transparent panel. Heating means are provided for heating the generated air flow. At least one section of the conveying duct has insulating walls. The invention relates also to a video camera holding device for use in protection cases.

[0006] Conventionally, many devices are available that offer camera holding capabilities, allowing users to mount or stabilize their cameras for event photography or videography. However, these traditional devices do not have the advanced functionality of automatically detecting the presence of people within the event's enclosure, nor do they assist in capturing photos or videos based on the movement or positioning of individuals. As a result, users still rely on manual adjustments and interventions to ensure key moments are captured, often missing spontaneous interactions or requiring significant effort to track people throughout the event.

[0007] To overcome the limitations of traditional camera holding devices, there is a need in the art to develop a device that aids users in capturing photos and videos at events by automatically detecting the presence of people within the event's enclosure. Such a device needs to eliminate the need for constant manual adjustments and allow for seamless, dynamic recording, ensuring that key moments are captured effortlessly. By detecting and responding to the movement and positioning of individuals, it would provide a more efficient and flexible approach to event photography and videography, enhancing the overall experience for both the user and the attendees.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a device that assists users in capturing photos and videos during an event by automatically detecting the presence of people within the event's enclosure, enabling more efficient and dynamic recording of moments without manual intervention.

[0010] Another object of the present invention is to develop a device that is capable of automatically adjusting the height and angle of the camera based on the height of individuals at the event, while also allowing manual direction by the user, ensuring optimal framing and the capture of diverse perspectives for enhanced photo and video quality during the event.

[0011] Yet another object of the present invention is to develop a device that guides individuals at an event to arrange themselves in a queue for photo or video capture, ensuring an organized and efficient process, thereby improving the quality of group shots and reducing confusion or disorder during the event's photography or videography sessions.

[0012] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0013] The present invention relates to a video capturing assistive device that aids users in capturing photos and videos during events by automatically detecting presence of people within the event's enclosure, enabling seamless recording of key moments without manual adjustments, ensuring efficient and dynamic coverage throughout the event.

[0014] According to an embodiment of the present invention, a video capturing assistive device comprises a cuboidal body positioned over a ground surface of an enclosure by means of a plurality of motorized wheels arranged beneath the body; a platform configured over the body to accommodate a camera; an artificial intelligence based imaging unit installed over the body that determines the presence of people within the enclosure; a database associated with the microcontroller that is pre-saved with a mapping of the enclosure along with a schedule regarding the times at which pictures and videos are to be captured at various portions of the enclosure; an extendable sliding unit installed at each corner of the body in a vertical manner to provide translation to a rope configured between the sliding units and the platform to adjust the height of the platform; a motorized roller configured with each of the ropes that actuates to wrap and unwrap the rope to regulate the angle of the platform; a holographic projection unit installed over the body that guides the people to arrange themselves over the surface in accordance with the projections; and a proximity sensor integrated with the body that monitors the distance of the body from the people, animals and objects present in proximity, based on which the microcontroller regulates the speed of the wheels to prevent chances of collision, wherein the microcontroller directs the projection unit to project a path over the surface to guide the people to follow the path to prevent collision with the body.

[0015] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates a perspective view of a video capturing assistive device.

DETAILED DESCRIPTION OF THE INVENTION

[0017] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0018] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0019] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0020] The present invention relates to a video capturing assistive device that helps users capture photos and videos during events by automatically detecting the presence of people within the event's enclosure, enabling efficient, dynamic recording without manual adjustments, ensuring key moments are captured seamlessly as individuals move throughout the event space.

[0021] Referring to Figure 1, a perspective view of a video capturing assistive device is illustrated, comprising a cuboidal body 101 with a plurality of motorized wheels 102, a platform 103 configured over the body 101, an artificial intelligence based imaging unit 104 installed over the body 101, an extendable sliding unit 105 installed at each corner of the body 101 in a vertical manner with a rope 106 attached to the platform 103, a motorized roller 107 configured with each rope 106, a holographic projection unit 108 installed over the body 101, a plurality of extendable clamps 109 installed over the platform 103, a robotic arm 110 installed over the platform 103, and a microphone 111 mapped over the body 101.

[0022] The device proposed herein includes a cuboidal body 101 configured with a platform 103, to be positioned over a ground surface of an enclosure and accessed by the user to place a camera over the platform 103 for capturing pictures and videos of an event being held at the enclosure. The body 101 serves as a structural foundation for the various components associated with the device, wherein the body 101 is made up of a material that includes but is not limited to stainless steel, which in turn ensures that the device is of generous size yet light in weight.

[0023] The body 101 is equipped with motorized wheels 102 in association with a microcontroller, wherein the wheels 102 are installed with the support of multiple rod-like structures to maneuver the body 101 throughout the enclosure. The supporting rods help maintain an optimum distance between the base of the body 101 and the enclosure, ensuring efficient operation of the device, preventing interference, and enhancing safety.

[0024] In order to activate the functioning of the device, a user is required to manually switch on the device by pressing a button positioned on the body 101, wherein the button used herein is a push button. Upon pressing of the button, the circuit closes, allowing conduction of electricity that leads to activation of the device, and vice versa.

[0025] Upon activation of the device by the user, an inbuilt microcontroller embedded within the body 101 and linked to the switch generates a command to activate an artificial intelligence based imaging unit 104 installed over the body 101, which determines the presence of people within the enclosure. The imaging unit 104 comprises an image capturing arrangement including a set of lenses that captures multiple images of the surroundings, and the captured images are stored within the memory of the imaging unit 104 in the form of optical data. The imaging unit 104 also comprises a processor that is integrated with artificial intelligence protocols, such that the processor processes the optical data and extracts the required data from the captured images. The extracted data is further converted into digital pulses and bits and transmitted to the microcontroller. The microcontroller processes the received data and determines the presence of people within the enclosure.
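The specification does not name a particular detection model, so the decision step the microcontroller performs can only be sketched. The following minimal, hypothetical Python sketch assumes the imaging unit's processor emits per-detection confidence scores; the `people_present` function and its thresholds are illustrative, not part of the disclosure.

```python
# Hypothetical sketch: deciding "people present" from detection scores.
# `detections` stands in for the imaging unit's AI output (labels with
# confidence scores); the model itself is not specified in the source.

def people_present(detections, min_confidence=0.6, min_count=1):
    """Return True when enough confident person detections exist."""
    confident = [d for d in detections
                 if d["label"] == "person" and d["score"] >= min_confidence]
    return len(confident) >= min_count

# Example frame: one confident detection, one too weak to count.
frame = [{"label": "person", "score": 0.91},
         {"label": "person", "score": 0.40}]
```

In practice the score threshold trades missed detections against false triggers of the wheel actuation described above.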

[0026] The imaging unit 104 works in sync with a laser sensor integrated with the platform 103 to determine the dimensions of the camera. The laser sensor works by emitting a laser beam toward the camera. When the laser beam hits the object, it reflects back to the sensor. The sensor calculates the time taken for the laser to return, determining the distance to the camera. By measuring the reflected laser's angle and distance, the sensor accurately calculates the camera's dimensions, such as length, width, or height, and accordingly triggers the microcontroller to actuate a plurality of extendable clamps 109 installed over the platform 103 to grip the camera.
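The time-of-flight principle the laser sensor relies on reduces to one formula: distance is half the round-trip time multiplied by the speed of light. A small illustrative calculation (general physics, not taken from the source):

```python
# Illustrative time-of-flight distance calculation for the laser sensor.
# The division by 2 accounts for the beam travelling out and back.

SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate)

def tof_distance(round_trip_seconds):
    """Distance in metres to the reflecting surface."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 20 ns round trip corresponds to a target about 3 m away.
d = tof_distance(20e-9)
```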

[0027] The extension/retraction of the extendable clamps 109 is powered pneumatically under the control of the microcontroller by employing a pneumatic unit associated with the body 101, including an air compressor, air cylinders, air valves and pistons, which work in collaboration to aid in the extension and retraction of the clamps 109. The pneumatic unit is operated by the microcontroller, such that the microcontroller actuates the valve to allow passage of compressed air from the compressor into the cylinder; the compressed air develops pressure against the piston, pushing and extending it. The piston is connected to the clamps 109, and due to the applied pressure the clamps 109 extend; similarly, the microcontroller retracts the clamps 109 by closing the valve, resulting in retraction of the piston. Thus, the microcontroller regulates the extension/retraction of the clamps 109 in order to grip the camera over the platform 103 in a secured manner.

[0028] In response to the determined presence of people within the enclosure, the microcontroller regulates actuation of the wheels 102 to translate the body 101 within the enclosure into proximity with the people. Each motorized wheel assembly 102 comprises a pair of wheels coupled with a motor via a shaft, wherein upon the motor receiving the command from the microcontroller, the motor rotates in a clockwise or anti-clockwise direction in order to drive the wheels 102 via the shaft. The wheels 102 thus translate the body 101 within the enclosure into proximity with the people.

[0029] The microcontroller is associated with a database that is pre-saved with a mapping of the enclosure along with a schedule regarding the times at which pictures and videos are to be captured at various portions of the enclosure, and the microcontroller accordingly regulates the movement of the wheels 102 within the enclosure so as to allow the camera to capture pictures and videos at the various portions of the enclosure at the scheduled times.
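The schedule lookup the microcontroller performs against the database can be sketched as a simple time-window search. The zone names and time windows below are hypothetical placeholders; the source does not specify the database format.

```python
# Hypothetical sketch of the pre-saved schedule: each entry maps a time
# window (minutes since the event start) to a portion of the enclosure.

SCHEDULE = [
    (0, 30, "entrance"),
    (30, 90, "stage"),
    (90, 120, "dining_area"),
]

def target_zone(minutes, schedule=SCHEDULE):
    """Return the zone the device should drive to, or None if no
    capture is scheduled at this time."""
    for start, end, zone in schedule:
        if start <= minutes < end:
            return zone
    return None
```

The microcontroller would then route the wheels 102 toward the returned zone using the pre-saved enclosure mapping.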

[0030] A microphone 111 mapped over the body 101 is activated by the microcontroller to enable the user to provide voice commands regarding regulation of the height and angle of the platform 103. The microphone 111 contains a small diaphragm connected to a moving coil. When the sound waves of the user's voice hit the diaphragm, the coil vibrates, moving back and forth in the magnet's field and generating an electrical current. These signals are sent to the microcontroller for processing the user's input voice command regarding regulation of the height and angle of the platform 103.

[0031] In response to the input commands of the user, the microcontroller actuates an extendable sliding unit 105 installed at each corner of the body 101 in a vertical manner to provide translation to a rope 106 configured between the sliding unit 105 and the platform 103. The extendable sliding unit 105 includes a sliding rack and rail, such that the rope 106 is mounted over the rack, which is electronically operated by the microcontroller for moving over the rail. The sliding unit 105 is powered by a DC (direct current) motor that is activated by the microcontroller by providing the required electric current to the motor. The motor comprises a coil that converts the received electric current into mechanical force by generating a magnetic field; this mechanical force powers the rack to slide the rope 106 and thereby regulate the height of the platform 103, enabling the camera to capture the pictures and videos in accordance with the height of the people as detected by the imaging unit 104 along with the user-defined height.

[0032] A motorized roller 107 configured with each rope 106 is actuated by the microcontroller to wrap and unwrap the rope 106 in a coordinated manner so as to regulate the angle of the platform 103, enabling the camera to capture the pictures and videos at multiple angles. The motorized roller 107 consists of a disc coupled to a motor via a shaft. Upon actuation of the motorized roller 107 by the microcontroller, the motor provides the rotational force necessary to turn the disc. The speed and direction of the motor dictate the rate and direction of winding and unwinding of the rope 106, both of which are regulated by the microcontroller so as to enable the camera to capture the pictures and videos at multiple angles.
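The geometry behind the coordinated wrapping can be made concrete: winding one rope 106 shorter than the opposite rope tips the platform 103, and the tilt angle follows from simple trigonometry. The function and values below are illustrative assumptions, not dimensions from the disclosure.

```python
# Illustrative tilt geometry: if two opposite ropes 106 suspend the
# platform 103 a horizontal span apart, a rope-length difference of
# `length_a - length_b` tips the platform by atan2(difference, span).

import math

def platform_tilt_deg(length_a, length_b, span):
    """Tilt of the platform in degrees from unequal rope lengths."""
    return math.degrees(math.atan2(length_a - length_b, span))

# Equal rope lengths give a level platform.
level = platform_tilt_deg(1.2, 1.2, 0.8)
```

The microcontroller would invert this relation to decide how much each roller 107 must wind or unwind for a requested camera angle.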

[0033] Based on the output of the imaging unit 104, which determines the number of people in front of the camera, a holographic projection unit 108 installed over the body 101 is activated by the microcontroller to project holographic projections over the surface in front of the people, guiding the people to arrange themselves over the surface in accordance with the projections. The holographic projection unit 108 works by creating and projecting holograms, which are three-dimensional images formed by the interference of light waves. Firstly, the laser light from the holographic projection unit 108 is split into two beams: the object beam, which interacts with the people such that its light waves are altered based on their shape and features, and the reference beam, which remains unchanged. The altered object beam and the reference beam intersect to create an interference pattern. This pattern is recorded on a photosensitive surface such as a holographic plate. The interference pattern contains information about the phase and amplitude of the light waves, preserving the three-dimensional details of the people. During projection, a laser beam is directed onto the recorded interference pattern, diffracting the laser light and reconstructing the original wavefronts from the people and the reference beams. The reconstructed wavefronts create a three-dimensional image that appears to float in space, guiding the people to arrange themselves over the surface in accordance with the projections.
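The recording step described above can be summarized by the standard holographic interference relation from general optics (background mathematics, not specific to this disclosure):

```latex
% Intensity recorded on the holographic plate when the object beam O
% and the reference beam R interfere:
I \;=\; |O + R|^2 \;=\; |O|^2 + |R|^2 + O R^{*} + O^{*} R
% The cross terms O R^{*} and O^{*} R encode the phase of the object
% beam; re-illuminating the plate with R diffracts light proportional
% to |R|^2 O, reconstructing the object wavefront as a 3D image.
```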

[0034] A proximity sensor integrated with the body 101 monitors the distance of the body 101 from the people, animals and objects present in proximity, in sync with the imaging unit 104. The proximity sensor works by emitting infrared light, which reflects off nearby objects, people, or animals. The sensor detects the reflected light and measures the time it takes for the light to return, calculating the distance based on this time delay. The sensor continuously monitors the proximity of surrounding bodies and provides real-time distance data. This information is synchronized with the imaging unit 104, which captures visual data of the detected objects. Together, the proximity sensor and imaging unit 104 provide cohesive data enabling the microcontroller to monitor and track the positions of objects, people, and animals in real time.

[0035] Based on the positions of objects, people, and animals in real time, the microcontroller regulates the speed of the wheels 102 to prevent chances of collision, wherein the microcontroller directs the projection unit 108 to project a path over the surface to guide the people to follow the path to prevent collision with the body 101.
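The source states that speed is regulated with measured clearance but leaves the control law unspecified. One plausible minimal rule is a linear ramp between a hard stop distance and a safe cruising distance; the thresholds below are assumed values for illustration only.

```python
# Sketch of a collision-avoidance speed rule (assumed, not from the
# source): stop inside d_stop, run at full speed beyond d_safe, and
# scale linearly in between.

def wheel_speed(distance_m, v_max=1.0, d_stop=0.5, d_safe=2.0):
    """Commanded wheel speed as a function of measured clearance."""
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_safe:
        return v_max
    return v_max * (distance_m - d_stop) / (d_safe - d_stop)
```

A real controller would also filter the proximity readings to avoid jerky speed changes from sensor noise.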

[0036] In case the user, via the microphone 111, provides voice commands regarding regulation of the filters or zoom of the camera, an OCR (Optical Character Recognition) module integrated with the imaging unit 104 reads the labelling on the switches of the camera. The OCR module works by analyzing images captured by the imaging unit 104. The imaging unit 104 captures the label on the switch, and the OCR technique processes the image, identifying patterns that resemble characters or text. It converts these patterns into machine-readable text by comparing them with stored character templates. The OCR module then extracts the label's information, such as the switch's name or settings, allowing the microcontroller to actuate a robotic arm 110 installed over the platform 103 to operate the camera so as to regulate the filters and zoom of the camera as defined by the user.
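The template-comparison step of the OCR module can be illustrated with a toy nearest-template matcher. Real OCR compares pixel arrays; here glyphs are modelled as bit strings to keep the sketch short, and the two templates are invented for illustration.

```python
# Toy illustration of template matching as described above: each glyph
# is compared against stored character templates and the closest one
# (fewest differing bits) wins. Templates are hypothetical 4x4 bitmaps
# flattened to strings.

def hamming(a, b):
    """Count of positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def match_char(glyph, templates):
    """Return the template label with the smallest bit difference."""
    return min(templates, key=lambda label: hamming(glyph, templates[label]))

TEMPLATES = {
    "Z": "1111001001001111",
    "O": "1111100110011111",
}
```

A noisy glyph that differs from the stored "Z" in one bit still matches "Z", which is the tolerance property template matching provides.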

[0037] The robotic arm 110 comprises a robotic link and a clamp attached to the link. The robotic link is made of several segments that are attached together by joints, also referred to as axes. Each joint of the segments contains a stepper motor that rotates and allows the robotic link to complete a specific motion of the arm 110. Upon actuation of the robotic arm 110 by the microcontroller, the motors drive the movement of the clamp to operate the camera so as to regulate the filters and zoom of the camera.

[0038] Lastly, a battery is installed within the device and connected to the microcontroller; it supplies current to all the electrically powered components that need electric power to perform their functions and operations in an efficient manner. The battery utilized here is preferably a dry battery made up of lithium-ion material, which gives the device a long-lasting as well as efficient DC (Direct Current) supply that helps every component function properly. As the device is battery operated, it does not need any mains voltage for functioning; hence the battery lends portability to the device, i.e., the user is able to place as well as move the device from one place to another as per requirements.

[0039] The present invention works best in the following manner. The cuboidal body 101 as disclosed in the invention is developed to be configured with the platform 103, positioned over the ground surface of the enclosure and accessed by the user to place the camera over the platform 103 for capturing pictures and videos of the event being held at the enclosure. Upon activation of the device by the user, the inbuilt microcontroller embedded within the body 101 and linked to the switch generates the command to activate the artificial intelligence based imaging unit 104, which determines the presence of people within the enclosure. The imaging unit 104 works in sync with the laser sensor to determine the dimensions of the camera and accordingly triggers the microcontroller to actuate the plurality of extendable clamps 109 to grip the camera. In response to the determined presence of people within the enclosure, the microcontroller regulates actuation of the wheels 102 to translate the body 101 within the enclosure into proximity with the people. The microphone 111 is activated by the microcontroller to enable the user to provide voice commands regarding regulation of the height and angle of the platform 103. In response to the input commands of the user, the microcontroller actuates the extendable sliding unit 105 in the vertical manner to provide translation to the rope 106 configured between the sliding unit 105 and the platform 103. The motorized roller 107 configured with each rope 106 is actuated by the microcontroller to wrap and unwrap the rope 106 in the coordinated manner so as to regulate the angle of the platform 103, enabling the camera to capture the pictures and videos at multiple angles.

[0040] In continuation, based on the output of the imaging unit 104, which determines the number of people in front of the camera, the holographic projection unit 108 is activated by the microcontroller to project holographic projections over the surface in front of the people, guiding the people to arrange themselves over the surface in accordance with the projections. The proximity sensor integrated with the body 101 monitors the distance of the body 101 from the people, animals and objects present in proximity, in sync with the imaging unit 104. Based on the positions of objects, people, and animals in real time, the microcontroller regulates the speed of the wheels 102 to prevent chances of collision, wherein the microcontroller directs the projection unit 108 to project the path over the surface to guide the people to follow the path to prevent collision with the body 101. In case the user, via the microphone 111, provides voice commands regarding regulation of the filters or zoom of the camera, the OCR (Optical Character Recognition) module reads the labelling on the switches of the camera, allowing the microcontroller to actuate the robotic arm 110 to operate the camera so as to regulate the filters and zoom of the camera as defined by the user.

[0041] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.
Claims: 1) A video capturing assistive device, comprising:

i) a cuboidal body 101 positioned over a ground surface of an enclosure by means of a plurality of motorized wheels 102 arranged beneath said body 101, wherein a platform 103 is configured over said body 101 that is accessed by a user to position a camera over said platform 103 for capturing pictures and videos of an event being held at said enclosure;
ii) an artificial intelligence based imaging unit 104 installed over said body 101 and integrated with a processor for capturing and processing images of said enclosure, wherein based on said captured images, a microcontroller linked with said imaging unit 104 determines the presence of people within said enclosure, based on which said microcontroller directs said wheels 102 to translate said body 101 within said enclosure;
iii) a database associated with said microcontroller that is pre-saved with a mapping of said enclosure along with a schedule regarding the times at which said pictures and videos are to be captured at various portions of said enclosure, wherein based on said mapping and schedule, said microcontroller regulates movement of said wheels 102 within said enclosure;
iv) an extendable sliding unit 105 installed at each corner of said body 101 in a vertical manner and directed by said microcontroller to provide translation to a rope 106 configured with each of said sliding units 105, wherein free ends of said ropes 106 are attached to said platform 103 and simultaneous actuation of said sliding units 105 regulates the height of said platform 103 to enable said camera to capture said pictures and videos in accordance with the height of said people as detected by said imaging unit 104;
v) a motorized roller 107 configured with each of said ropes 106 that actuates to wrap and unwrap said rope 106, wherein said microcontroller directs said roller 107 to wrap and unwrap said rope 106 in a coordinated manner so as to regulate the angle of said platform 103, enabling said camera to capture said pictures and videos at multiple angles;
vi) a holographic projection unit 108 installed over said body 101 and actuated by said microcontroller to project holographic projections over a surface in front of said people to guide said people to arrange themselves over said surface in accordance with said projections, wherein said microcontroller, based on the output of said imaging unit 104, determines the number of people in front of said camera, based on which said microcontroller directs said holographic projection unit 108 to project over said surface; and
vii) a proximity sensor integrated with said body 101 and synced with said imaging unit 104 to monitor the distance of said body 101 from people, animals and objects present in proximity, based on which said microcontroller regulates the speed of said wheels 102 to prevent chances of collision, wherein said microcontroller directs said projection unit 108 to project a path over said surface to guide said people to follow said path to prevent collision with said body 101.

2) The device as claimed in claim 1, wherein a laser sensor is integrated with said platform 103 and synced with said imaging unit 104 to determine the dimensions of said camera, based on which said microcontroller actuates a plurality of extendable clamps 109 installed over said platform 103 to grip said camera.

3) The device as claimed in claim 1, wherein said microcontroller directs regulation of height and angle of said platform 103 in accordance with number of said people, light intensity in surroundings and portion of said enclosure.

4) The device as claimed in claim 1, wherein a microphone 111 is mapped over said body 101 to receive voice commands of said user regarding regulation of the height and angle of said platform 103, based on which said microcontroller directs said sliding units 105 and rollers 107 to orient said platform 103 at said user-specified angle and height.

5) The device as claimed in claim 1, wherein, in case said user via said microphone 111 provides voice commands regarding regulation of the filters or zoom of said camera, said microcontroller actuates a robotic gripper installed over said platform 103 to operate said camera so as to regulate said filters and zoom of said camera as detected by an OCR (Optical Character Recognition) module integrated with said imaging unit 104.

6) The device as claimed in claim 1, wherein a battery is associated with said device for powering up electrical and electronically operated components associated with said device.

Documents

Application Documents

# Name Date
1 202421094833-STATEMENT OF UNDERTAKING (FORM 3) [02-12-2024(online)].pdf 2024-12-02
2 202421094833-REQUEST FOR EXAMINATION (FORM-18) [02-12-2024(online)].pdf 2024-12-02
3 202421094833-REQUEST FOR EARLY PUBLICATION(FORM-9) [02-12-2024(online)].pdf 2024-12-02
4 202421094833-PROOF OF RIGHT [02-12-2024(online)].pdf 2024-12-02
5 202421094833-POWER OF AUTHORITY [02-12-2024(online)].pdf 2024-12-02
6 202421094833-FORM-9 [02-12-2024(online)].pdf 2024-12-02
7 202421094833-FORM FOR SMALL ENTITY(FORM-28) [02-12-2024(online)].pdf 2024-12-02
8 202421094833-FORM 18 [02-12-2024(online)].pdf 2024-12-02
9 202421094833-FORM 1 [02-12-2024(online)].pdf 2024-12-02
10 202421094833-FIGURE OF ABSTRACT [02-12-2024(online)].pdf 2024-12-02
11 202421094833-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-12-2024(online)].pdf 2024-12-02
12 202421094833-EVIDENCE FOR REGISTRATION UNDER SSI [02-12-2024(online)].pdf 2024-12-02
13 202421094833-EDUCATIONAL INSTITUTION(S) [02-12-2024(online)].pdf 2024-12-02
14 202421094833-DRAWINGS [02-12-2024(online)].pdf 2024-12-02
15 202421094833-DECLARATION OF INVENTORSHIP (FORM 5) [02-12-2024(online)].pdf 2024-12-02
16 202421094833-COMPLETE SPECIFICATION [02-12-2024(online)].pdf 2024-12-02
17 Abstract.jpg 2024-12-30
18 202421094833-FORM-26 [03-06-2025(online)].pdf 2025-06-03