
A System And Method For Context Based Content Retrieval In An Online Video Streaming

Abstract: Disclosed is a system and a method for context-based content retrieval in online video streaming. The processor (201) is configured for receiving a streaming video. The processor (201) is configured for transforming raw data obtained from an imagery in one or more scenes in the streaming video into structured data. The processor (201) is configured for identifying an object and/or event of interest from the structured data in order to determine a context of the one or more scenes in the streaming video. The processor (201) is configured for fetching a relevant media content based on the context of the streaming video. The processor (201) is configured for superimposing the relevant media content either directly or subtly or in the background in the streaming video. The processor (201) is configured for displaying the relevant media content in the streaming video on a user device associated with a user. [To be published with Figure 1]


Patent Information

Application #
Filing Date
29 December 2018
Publication Number
27/2020
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
ip@stratjuris.com
Parent Application

Applicants

ZENSAR TECHNOLOGIES LIMITED
ZENSAR KNOWLEDGE PARK, PLOT # 4, MIDC, KHARADI, OFF NAGAR ROAD, PUNE-411014, MAHARASHTRA, INDIA

Inventors

1. RAMANAGAR, Vijay Muniraj
S/o BettaSwamy Gowda, #125, Near Anjanaya Swamy Temple, Tadikavagilu, Ramanagar District, Ramanagar-562159, Karnataka, India

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION

(See Section 10 and Rule 13)

Title of invention:
A SYSTEM AND METHOD FOR CONTEXT BASED CONTENT RETRIEVAL IN AN ONLINE VIDEO STREAMING

APPLICANT
Zensar Technologies Limited.
(An Indian entity having address)
Zensar Knowledge Park,
Plot # 4, MIDC, Kharadi, Off
Nagar Road, Pune-411014,
Maharashtra, India

The following specification describes the invention and the manner in which it is to be performed.

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian Provisional Patent Application No. 201821049817 filed on 29th December 2018, the entirety of which is incorporated herein by reference.

TECHNICAL FIELD
The present subject matter described herein, in general, relates to a system and a method for context-based content retrieval in an online video streaming.

BACKGROUND
The subject matter discussed in the background section should not be assumed to be prior art merely because of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.

In recent times, online video streaming has grown tremendously. Online video streaming platforms and services have access to extremely large audiences due to the growth of digitization. Therefore, online video streaming platforms/services are also used for digital marketing and targeted advertisements. These streaming services have access to accurate and specific viewer data that can be used to target potential clients more directly, typically by using user browser history and user profile data to run marketing campaigns.

In the state of the art, advertisements displayed during broadcasting or telecasting of a video clip, particularly in online videos, are selected based on video tags or user browser history. Further, advertisements are displayed at set intervals and for set durations. This approach is limited in that advertisements are placed at set intervals without considering the content being displayed in real time. Such repetitive advertisements generally irritate the user and distract the user from the theme of the video; repetitive advertisement is therefore a nuisance for consumers. Further, it is a waste of resources from the advertiser's point of view, as it fails to attract consumers.

Further, the existing systems fail to provide efficient storage and retrieval for context-based advertisements. Due to poor content retrieval facilities, the quality of advertising remains questionable even as the consumer's time spent on the streaming video increases. Therefore, the quality of advertising remains a major issue.

Therefore, there is a long-standing need for an efficient and improved system and method for context-based content retrieval in online video streaming.

SUMMARY
This summary is provided to introduce concepts related to a system and a method for context-based content retrieval in an online video streaming and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.

In one implementation, a system for context-based content retrieval in an online video streaming is disclosed. The system may comprise a processor and a memory coupled with the processor. The processor may be configured to execute programmed instructions stored in the memory. The processor may be configured to execute the programmed instructions for receiving a streaming video. Further, the processor may be configured to execute the programmed instructions for transforming raw data obtained from an imagery in one or more scenes in the streaming video into structured data. The processor may be configured to execute the programmed instructions for identifying one or more objects and/or events of interest from the structured data in order to determine a context of the one or more scenes in the streaming video. The processor may be further configured to execute the programmed instructions for fetching a relevant media content based on the context of the one or more scenes in the streaming video. Further, the processor may be configured to execute the programmed instructions for superimposing the relevant media content either directly or subtly or in the background in the streaming video. Furthermore, the processor may be configured to execute the programmed instructions for displaying the relevant media content in the streaming video on a user device associated with a user.

In another implementation, a method for context-based content retrieval in an online video streaming is disclosed. The method may comprise receiving, via a processor, a streaming video. The method may further comprise transforming, via the processor, raw data obtained from an imagery in one or more scenes in the streaming video into structured data. The method may further comprise identifying, via the processor, one or more objects and/or events of interest from the structured data in order to determine a context of the one or more scenes in the streaming video. The method may further comprise fetching, via the processor, a relevant media content based on context of the one or more scenes of the streaming video. The method may further comprise superimposing, via the processor, the relevant media content either directly or subtly or in the background in the streaming video. Furthermore, the method may comprise displaying, via the processor, the relevant media content in the streaming video on a user device associated with a user.

BRIEF DESCRIPTION OF DRAWINGS

The detailed description is described with reference to the accompanying figures. The same numbers are used throughout the drawings to refer to like features and components.

Figure 1 illustrates an implementation 100 of a system 101 for context-based content retrieval in an online video streaming, in accordance with an embodiment of the present disclosure.
Figure 2 illustrates an implementation 300 of a video analytics for context-based content retrieval in an online video streaming, in accordance with an embodiment of the present disclosure.

Figure 3 illustrates a method 400 for context-based content retrieval in an online video streaming, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION

Reference throughout the specification to “various embodiments,” “some embodiments,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in various embodiments,” “in some embodiments,” “in one embodiment,” or “in an embodiment” in places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.

Referring to Figure 1, a network implementation (100) of system (101) for context-based content retrieval in an online video streaming is illustrated, in accordance with an embodiment of the present subject matter.

In an embodiment, the system (101) may be connected to a user device (103) over a network (102). It may be understood that the system (101) may be accessed by multiple users through one or more user devices (103-1), (103-2), (103-3) ... (103-n), collectively referred to as a user device (103). The user device (103) may be any electronic device, communication device, image-capturing device, machine, software, automated computer program, robot, or a combination thereof.

In an embodiment, though the present subject matter is explained considering that the system (101) is implemented on a server, it may be understood that the system (101) may also be implemented in a variety of user devices, such as, but not limited to, a portable computer, a personal digital assistant, a handheld device, a mobile, a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a mobile device, and the like. In one embodiment, the system (101) may be implemented in a cloud-computing environment. In an embodiment, the network (102) may be a wireless network such as Bluetooth, Wi-Fi, 3G, 4G/LTE and the like, a wired network, or a combination thereof. The network (102) can be accessed by the user device (103) using wired or wireless network connectivity means including updated communications technology.

In one embodiment, the network (102) can be implemented as one of the different types of networks, cellular communication network, local area network (LAN), wide area network (WAN), the internet, and the like. The network (102) may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network (102) may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.

Further, referring to Figure 1, various components of the system (101) are illustrated, in accordance with an embodiment of the present subject matter. As shown, the system (101) may include at least one processor (201), an input/output interface (203), a memory (205), programmed instructions (207) and data (209). In one embodiment, the at least one processor (201) is configured to fetch and execute computer-readable instructions stored in the memory (205).

In one embodiment, the I/O interface (203) may be implemented as a mobile application or a web-based application and may further include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface (203) may allow the system (101) to interact with the user devices (103). Further, the I/O interface (203) may enable the user device (103) to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface (203) can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface (203) may include one or more ports for connecting to another server. In an exemplary embodiment, the I/O interface (203) is an interaction platform which may provide a connection between users and the system (101).

In an implementation, the memory (205) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and memory cards. The memory (205) may include data (209).

In one embodiment, the programmed instructions (207) may include routines, programs, objects, components, data structures, etc., which perform particular tasks or functions or implement particular abstract data types. The data (209) may comprise a data repository (211), a database (213) and other data (215). In one embodiment, the database (213) may comprise data of advertisements associated with a plurality of images. The other data (215), amongst other things, serves as a repository for storing data processed, received, and generated by one or more components and programmed instructions.
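
The disclosure does not specify a schema for the database (213); the following is a minimal Python sketch of one plausible record layout, in which every field name (context_tag, media_uri, placement_mode) is a hypothetical illustration rather than part of the specification.

```python
# Minimal sketch of one plausible layout for the database (213) of
# advertisements. All field names here are hypothetical; the specification
# only states that advertisements are associated with a plurality of images.
from dataclasses import dataclass

@dataclass
class AdRecord:
    context_tag: str     # scene context the advertisement matches, e.g. "party"
    media_uri: str       # location of the advertisement asset
    placement_mode: str  # "direct", "subtle", or "background"

ad_database = [
    AdRecord("party", "ads/cold_drink.mp4", "subtle"),
    AdRecord("travel", "ads/airline.mp4", "direct"),
    AdRecord("smoking", "ads/health_warning.png", "background"),
]
```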

The aforementioned computing devices may support communication over one or more types of networks in accordance with the described embodiments. For example, some computing devices and networks may support communications over a Wide Area Network (WAN), the Internet, a telephone network (e.g., analog, digital, POTS, PSTN, ISDN, xDSL), a mobile telephone network (e.g., CDMA, GSM, NDAC, TDMA, E-TDMA, NAMPS, WCDMA, CDMA-2000, UMTS, 3G, 4G), a radio network, a television network, a cable network, an optical network (e.g., PON), a satellite network (e.g., VSAT), a packet-switched network, a circuit-switched network, a public network, a private network, and/or other wired or wireless communications network configured to carry data. Computing devices and networks also may support wireless wide area network (WWAN) communications services including Internet access such as EV-DO, EV-DV, CDMA/1×RTT, GSM/GPRS, EDGE, HSDPA, HSUPA, and others.

The aforementioned computing devices and networks may support wireless local area network (WLAN) and/or wireless metropolitan area network (WMAN) data communications functionality in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards, protocols, and variants such as IEEE 802.11 (“WiFi”), IEEE 802.16 (“WiMAX”), IEEE 802.20x (“Mobile-Fi”), and others. Computing devices and networks also may support short range communication such as a wireless personal area network (WPAN) communication, Bluetooth® data communication, infrared (IR) communication, near-field communication, electromagnetic induction (EMI) communication, passive or active RFID communication, micro-impulse radar (MIR), ultra-wide band (UWB) communication, automatic identification and data capture (AIDC) communication, and others.

The working of the system (101) will now be described in detail with reference to Figures 1, 2 and 3 as below:

In one embodiment, the user may select a streaming video from multiple streaming videos available on a streaming video service. In one embodiment, the processor (201) may be configured for receiving the streaming video. The streaming video may comprise one or more scenes appearing in at least one frame of the streaming video.

The processor (201) may be further configured for transforming raw data obtained from an imagery in one or more scenes in the streaming video into structured data.
In one embodiment, the raw data may be an input data. In one embodiment, the structured data may comprise a set of images or imagery or frames of the streaming video labelled with the objects and/or events. The processor (201) may be configured for transforming the raw data obtained from the imagery in one or more scenes in the streaming video into the structured data by executing video analytics stored in the memory (205). The objective of the conversion of the raw data into structured data is to enable searching of desired objects and/or events of interest in the streaming video.
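
As an illustration only, this transformation step might be realized along the following lines in Python; the detect_objects() callable is a hypothetical stand-in for the labelling model described below, and the record fields are not prescribed by the disclosure.

```python
# Illustrative sketch of the raw-data-to-structured-data step. The
# detect_objects callable is a hypothetical stand-in for the labelling model;
# the specification describes the output only as frames labelled with the
# objects and/or events.
import cv2  # OpenCV, used here only for frame extraction

def transform_to_structured_data(video_path, detect_objects):
    """Yield one structured record per frame of the streaming video."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield {
            "frame_index": frame_index,
            "timestamp_ms": capture.get(cv2.CAP_PROP_POS_MSEC),
            "pixels": frame,                  # raw image data for this frame
            "labels": detect_objects(frame),  # e.g. ["cold drink", "pool"]
        }
        frame_index += 1
    capture.release()
```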

Now referring to Figure 2, an implementation (300) of the video analytics for context-based content retrieval in an online video streaming is illustrated, in accordance with an embodiment of the present subject matter. In one embodiment, the video analytics comprises a Video Analytics Engine (301) and a Media Content Placement Engine (302). In one embodiment, the Video Analytics Engine (301) may be configured to transform the raw data obtained from the imagery in the one or more scenes of the streaming video into the structured data. In one exemplary embodiment, one or more party scenes of the streaming video may comprise an imagery including, but not limited to, cold drinks, a music band, lights, a pool, and the like. The Video Analytics Engine (301) may label the imagery with one or more objects and/or events. For example, in one scenario, the Video Analytics Engine (301) may label the cold drink, music band, lights and pool of the party scene as the objects and/or events in the one or more scenes.
The processor (201) may be configured to execute the programmed instructions for identifying an object and/or event of interest from the structured data in order to determine a context of the one or more scenes in the streaming video. The Video Analytics Engine (301) may be configured to identify potential objects or events of interest from the imagery in the one or more scenes of the streaming video. In one exemplary embodiment, the Video Analytics Engine (301) may identify a cold drink as an object of interest, or a potential object or event of interest, from the labelled objects. Accordingly, the Video Analytics Engine (301) may determine the context of a party scene in this case. In another exemplary embodiment, the Video Analytics Engine (301) may identify an accident as an event of interest in one or more crime scenes of the streaming video. Accordingly, the Video Analytics Engine (301) may determine the context of a crime scene in this case.

In one embodiment, the Video Analytics Engine (301) may be configured to determine the context of the one or more scenes in the streaming video using a technique of video analytics selected from a group comprising artificial intelligence, deep machine learning and computer vision, or a combination thereof.

In one embodiment, the Video Analytics Engine (301) may be configured to take as input data a set of images or frames of the streaming video labelled with the objects/events in order to detect the object/event of interest. It must be noted herein that the Video Analytics Engine (301) may be configured to select an appropriate neural network model and train the neural network model by picking the right hyper-parameters to achieve the best model accuracy.

In one embodiment, Deep Neural Networks (DNNs) may be used to train the Video Analytics Engine (301). The Video Analytics Engine (301) may be configured for identifying specific objects/events in an image in a scene of the streaming video and tracking the corresponding frames. The model based on the DNNs may be configured for identifying the exact area of a frame, or position within the image, which contains the desired objects/events in the streaming video. In one exemplary embodiment, the desired object in the party scene of the streaming video is a cold drink. In another exemplary embodiment, the desired event in a scene of the streaming video may be an accident.
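
As a concrete but non-limiting sketch of such a DNN, the snippet below uses a pre-trained Faster R-CNN detector from torchvision as a stand-in for the unspecified neural network model; the score threshold and the assumption of an RGB input frame are illustrative choices, not part of the disclosure.

```python
# Sketch of DNN-based detection of objects/events and their frame positions,
# using a pre-trained Faster R-CNN from torchvision as a stand-in for the
# (unspecified) model selected and trained by the Video Analytics Engine (301).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects_of_interest(frame_rgb, score_threshold=0.8):
    """Return (label_id, bounding_box) pairs locating objects in the frame."""
    with torch.no_grad():
        prediction = model([to_tensor(frame_rgb)])[0]
    return [
        (int(label), box.tolist())
        for label, box, score in zip(
            prediction["labels"], prediction["boxes"], prediction["scores"]
        )
        if float(score) >= score_threshold
    ]
```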

The Video Analytics Engine (301) may be configured to help the user quickly highlight preferred subjects and objects of interest, and to detect and deliver subjects or objects of the imagery in the scene. In one exemplary embodiment, the Video Analytics Engine (301) is designed to help commercial and public advertisers better leverage video by enabling better placement of advertisements in any online video content, with publisher consent.

Now again referring to Figures 1 and 2, the processor (201) may be configured to execute programmed instructions (207) for fetching a relevant media content based on the context of the one or more scenes of the streaming video. In one embodiment, the relevant media content comprises one or more of an advertisement, a subliminal message, and the like. The processor (201) may execute the programmed instructions (207) for comparing the context of the one or more scenes of the streaming video with multiple media contents in order to fetch the relevant media content. The processor (201) may execute the programmed instructions (207) for fetching the relevant media content from the multiple media contents pre-stored either in the data repository (211) or a third-party server or an external database communicatively coupled with the processor (201). In one exemplary embodiment, the processor (201) may be configured to fetch an advertisement of a cold drink based on the context of the one or more party scenes. In another exemplary embodiment, the processor (201) may be configured to fetch a legal support advertisement or social awareness messages for the accident based on the context of the one or more accident scenes. In yet another exemplary embodiment, the processor (201) may be configured to fetch health awareness messages for a cigarette based on the context of one or more smoking scenes.
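
By way of a hedged sketch only, the comparison and fetch could be as simple as matching the determined context against the pre-stored records (the hypothetical AdRecord layout sketched earlier); the disclosure does not prescribe a particular matching strategy.

```python
# Hedged sketch of the fetch step: the determined scene context is compared
# against pre-stored records (see the hypothetical AdRecord layout above) and
# the first match is returned. The actual matching logic is not specified.
def fetch_relevant_media(context, ad_database):
    for record in ad_database:
        if record.context_tag == context:
            return record
    return None  # no relevant media content for this context

advertisement = fetch_relevant_media("party", ad_database)  # cold drink ad
```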

The processor (201) may be configured to execute programmed instructions (207) for superimposing the relevant media content either directly or subtly or in the background in the streaming video. In one embodiment, the Video Analytics Engine (301) may be configured for transferring parameters such as a frame location and details of the object/event of interest to the Media Content Placement Engine (302) in order to superimpose the relevant media content either directly or subtly or in the background in the streaming video.
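
One plausible realization of such superimposition, assuming the frame location arrives as a bounding box from the Video Analytics Engine (301), is simple alpha blending with OpenCV; the disclosure does not prescribe a compositing technique, so the blending weight below is purely illustrative.

```python
# Illustrative superimposition of the fetched media onto the frame region
# reported by the Video Analytics Engine (301). Alpha blending is one
# plausible reading of "subtle" placement; the blend weight is illustrative.
import cv2

def superimpose(frame, media_image, box, alpha=0.35):
    """Blend media_image into the frame region box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = (int(v) for v in box)
    region = frame[y1:y2, x1:x2]
    resized = cv2.resize(media_image, (x2 - x1, y2 - y1))
    frame[y1:y2, x1:x2] = cv2.addWeighted(resized, alpha, region, 1 - alpha, 0)
    return frame
```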

The processor (201) may be configured to execute programmed instructions (207) for displaying the relevant media content in the streaming video on the user device associated with the user. In one exemplary embodiment, the processor (201) may be configured to display the advertisement of the cold drink based on the context of the one or more party scenes in the streaming video. In another exemplary embodiment, the processor (201) may be configured for displaying a legal support advertisement or social awareness messages for the accident in one or more crime scenes in the streaming video. In yet another exemplary embodiment, the processor (201) may be configured for displaying health awareness messages for a cigarette based on the context of the one or more smoking scenes in the streaming video.

In one exemplary embodiment, if a scene in a video clip shows a character standing before a travel agency, or the scene otherwise relates to travel, then an advertisement relating to travel services may be subtly placed. The processor (201) may receive the travel streaming video and transform the raw data obtained from the imagery in one or more travelling scenes in the streaming video into the structured data. In one embodiment, one or more travelling scenes of the streaming video may comprise an imagery such as travel bags, an airport, etc. The Video Analytics Engine (301) may label the imagery with the objects. The processor (201) may identify the airport as an object of interest, or a potential object or event of interest, from the labelled objects. The processor (201) may determine the context of the one or more scenes in the streaming video by the technique of video analytics selected from a group comprising artificial intelligence, deep machine learning and computer vision, or a combination thereof. The processor (201) may be configured for comparing the context of the one or more travelling scenes, or items in the scene, with advertisements stored in a data repository (211). The processor may be further configured to fetch an advertisement of airlines based on the context of the one or more travelling scenes of the streaming video from the data repository (211) and insert it directly or subtly in the streaming video.

In another exemplary embodiment, a video clip showing a character smoking or drinking is considered. The processor (201) may receive one or more smoking or drinking scenes of the streaming video comprising an imagery such as a cigarette, alcohol, etc. The Video Analytics Engine (301) may label the imagery with the objects. The processor (201) may identify the cigarette as an object of interest, or a potential object or event of interest, from the labelled objects. The processor (201) may determine the context of the one or more scenes in the streaming video by the technique of video analytics selected from a group comprising artificial intelligence, deep machine learning and computer vision, or a combination thereof. The processor (201) may be configured to fetch appropriate advertisements from the data repository, highlighting the dangers and side effects of smoking and drinking. These advertisements may be superimposed during the video streaming either directly or in the background or subtly. If a crime scene is running and the theme relates to some criminal activity being shown, then the advertisements fetched/extracted from the data repository can relate to a helpline or legal support in relation to crime in general or the crime scene in particular.

Now, referring to Figure 3, a method (400) for context-based content retrieval in online video streaming is illustrated, in accordance with the embodiments of the present disclosure.

At step 401, the processor (201) may be configured for receiving a streaming video.
At step 402, the processor (201) may be configured for transforming raw data obtained from the imagery in one or more scenes in the streaming video into structured data.
At step 403, the processor (201) may be configured for identifying objects and/or events of interest from the structured data in order to determine the context of the one or more scenes in the streaming video.
At step 404, the processor (201) may be configured for fetching relevant media content based on the context of the streaming video.
At step 405, the processor (201) may be configured for superimposing the relevant media content either directly or subtly or in the background in the streaming video.
At step 406, the processor (201) may be configured for displaying the relevant media content in the streaming video on the user device associated with the user.
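
For illustration, steps 401 to 406 can be read as the following loop, composing the sketches given earlier; determine_context() and display_on_user_device() are hypothetical placeholders for the context-determination and display layers, and the fixed bounding box stands in for the frame location a detector would supply.

```python
# Sketch composing steps 401-406 of method (400) from the earlier snippets.
# determine_context and display_on_user_device are hypothetical placeholders;
# in practice the bounding box would come from the detector, not a constant.
import cv2

def process_stream(video_path, ad_database, determine_context,
                   detect_objects, display_on_user_device):
    for record in transform_to_structured_data(video_path, detect_objects):  # 401, 402
        context = determine_context(record["labels"])                        # 403
        advertisement = fetch_relevant_media(context, ad_database)           # 404
        if advertisement is None:
            continue
        media = cv2.imread(advertisement.media_uri)  # load the fetched asset
        box = (10, 10, 160, 100)                     # placeholder frame location
        composed = superimpose(record["pixels"], media, box)                 # 405
        display_on_user_device(composed)                                     # 406
```
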
Some embodiments of the present disclosure may provide efficient retrieval of the media based on the analysed content and context of the streaming video in real time.
The embodiments, examples and alternatives of the preceding paragraphs or the description and drawings, including any of their various aspects or respective individual features, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.

Although implementations of the system (101) and the method (400) for context-based content retrieval in online video streaming have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of the system (101) and the method (400) for context-based content retrieval in online video streaming.
CLAIMS:

WE CLAIM:
1. A system (101) for context-based content retrieval in online video streaming, the system comprising:
a processor (201); and
a memory (205) coupled with the processor (201), wherein the processor (201) is configured to execute programmed instructions stored in the memory for:
receiving a streaming video;
transforming raw data obtained from an imagery in one or more scenes in the streaming video into structured data;
identifying one or more objects and/or events of interest from the structured data in order to determine a context of the one or more scenes in the streaming video;
fetching a relevant media content based on the context of the one or more scenes in the streaming video;
superimposing the relevant media content either directly or subtly or in the background in the streaming video; and
displaying the relevant media content in the streaming video on a user device associated with a user.

2. The system as claimed in claim 1, wherein the one or more objects and/or events of interest are identified to determine the context of the one or more scenes in the streaming video by using a technique of video analytics selected from a group comprising artificial intelligence, deep machine learning and computer vision, or a combination thereof.

3. The system as claimed in claim 1, wherein the processor (201) further executes the programmed instructions for comparing the context of the one or more scenes in the streaming video with multiple media contents in order to fetch the relevant media content.

4. The system as claimed in claim 3, wherein the processor (201) further executes the programmed instructions for fetching the relevant media content from the multiple media contents pre-stored either in a data repository (211) or a third-party server or an external database communicatively coupled with the processor.

5. The system as claimed in claim 1, wherein the processor (201) superimposes the relevant media content based upon parameters selected from at least one of a frame location and details of the object/event of interest.

6. A method (400) for context-based content retrieval in online video streaming, the method (400) comprising:
receiving, via a processor (201), a streaming video;
transforming, via the processor (201), raw data obtained from an imagery in one or more scenes in the streaming video into structured data;
identifying, via the processor (201), one or more objects and/or events of interest from the structured data in order to determine a context of the one or more scenes in the streaming video;
fetching, via the processor (201), a relevant media content based on the context of the one or more scenes in the streaming video;
superimposing, via the processor (201), the relevant media content either directly or subtly or in the background in the streaming video; and
displaying, via the processor (201), the relevant media content in the streaming video on a user device associated with a user.

7. The method as claimed in claim 6, wherein the one or more objects and/or events of interest are identified to determine the context of the one or more scenes in the streaming video by a technique of video analytics selected from a group comprising artificial intelligence, deep machine learning and computer vision, or a combination thereof.

8. The method as claimed in claim 6, further comprising comparing, via the processor (201), the context of the one or more scenes in the streaming video with multiple media contents in order to fetch the relevant media content.

9. The method as claimed in claim 8, further comprising fetching the relevant media content from the multiple media contents pre-stored either in a data repository or a third-party server or an external database communicatively coupled with the processor (201).

10. The method as claimed in claim 6, wherein the relevant media content is superimposed based upon parameters comprising at least one of a frame location and details of the object/event of interest.

Dated this 28th of December 2019

Documents

Application Documents

# Name Date
1 201821049817-STATEMENT OF UNDERTAKING (FORM 3) [29-12-2018(online)].pdf 2018-12-29
2 201821049817-PROVISIONAL SPECIFICATION [29-12-2018(online)].pdf 2018-12-29
3 201821049817-PROOF OF RIGHT [29-12-2018(online)].pdf 2018-12-29
4 201821049817-POWER OF AUTHORITY [29-12-2018(online)].pdf 2018-12-29
5 201821049817-FORM 1 [29-12-2018(online)].pdf 2018-12-29
6 201821049817-DECLARATION OF INVENTORSHIP (FORM 5) [29-12-2018(online)].pdf 2018-12-29
7 201821049817-Proof of Right (MANDATORY) [07-05-2019(online)].pdf 2019-05-07
8 201821049817-RELEVANT DOCUMENTS [28-12-2019(online)].pdf 2019-12-28
9 201821049817-FORM 18 [28-12-2019(online)].pdf 2019-12-28
10 201821049817-FORM 13 [28-12-2019(online)].pdf 2019-12-28
11 201821049817-ENDORSEMENT BY INVENTORS [28-12-2019(online)].pdf 2019-12-28
12 201821049817-DRAWING [28-12-2019(online)].pdf 2019-12-28
13 201821049817-CORRESPONDENCE-OTHERS [28-12-2019(online)].pdf 2019-12-28
14 201821049817-COMPLETE SPECIFICATION [28-12-2019(online)].pdf 2019-12-28
15 201821049817-ORIGINAL UR 6(1A) FORM 1-080519.pdf 2019-12-31
16 Abstract1.jpg 2020-01-01
17 201821049817-FORM 3 [17-04-2020(online)].pdf 2020-04-17
18 201821049817-OTHERS [20-08-2021(online)].pdf 2021-08-20
19 201821049817-FER_SER_REPLY [20-08-2021(online)].pdf 2021-08-20
20 201821049817-CLAIMS [20-08-2021(online)].pdf 2021-08-20
21 201821049817-FER.pdf 2021-10-18
22 201821049817-US(14)-HearingNotice-(HearingDate-12-01-2024).pdf 2023-12-26
23 201821049817-Correspondence to notify the Controller [10-01-2024(online)].pdf 2024-01-10

Search Strategy

1 SEARCHAMENDED9817AE_25-10-2021.pdf
2 search9817E_03-03-2021.pdf