
Method And System For Carrying Out Binding Studies On Converged Infrastructure

Abstract: Molecular docking is one of the most frequently used methods in structure-based drug design. Existing methods are very compute intensive, expensive and require a very specific skillset. A method and system for carrying out binding studies on a converged infrastructure has been provided. The system is configured to operate in three phases. In the first phase, a life science application in the molecular docking area is migrated from a classic High Performance Computing (HPC) cluster to a Hadoop cluster, making it easily available, manageable and cost effective. In the second phase, containers are created with the application and its environment to migrate from private to public cloud to address the peak infrastructure requirements. And the third phase is automated cloud bursting. * To be published with FIG.1


Patent Information

Application #
Filing Date
20 December 2018
Publication Number
26/2020
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
kcopatents@khaitanco.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-05-09
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai - 400021 Maharashtra, India

Inventors

1. SARAPH, Arundhati, Anupam
Tata Consultancy Services Limited Plot No. 2 & 3, MIDC-SEZ, Rajiv Gandhi Infotech Park, Hinjewadi Phase III, Pune - 411057 Maharashtra, India
2. KULKARNI, Rajesh Gopalrao
Tata Consultancy Services Limited Plot No. 2 & 3, MIDC-SEZ, Rajiv Gandhi Infotech Park, Hinjewadi Phase III, Pune - 411057, Maharashtra, India
3. BAKSHI, Mayank
Tata Consultancy Services Limited Plot No. 2 & 3, MIDC-SEZ, Rajiv Gandhi Infotech Park, Hinjewadi Phase III, Pune - 411057 Maharashtra, India

Specification

Claims: WE CLAIM:

1. A method for carrying out binding studies on a converged infrastructure, the method comprising the processor-implemented steps of:

providing target proteins and compounds to be screened as an input, wherein the compounds are going to be assessed for binding to the target proteins through the process of molecular docking and each of the target proteins has their respective job length, wherein the job length varies depending on a plurality of docking parameters;
selecting a molecular docking tool for the purpose of docking of the target proteins;
migrating the molecular docking tool from a high performance computing (HPC) cluster to a Hadoop cluster;
containerizing the input and the migrated molecular docking tool using a container tool;
running a policy engine to identify the target proteins that qualify to move to a public cloud based on the job length;
bursting the containerized input and the molecular docking tool to a public cluster based on the identification of the target proteins by the policy engine; and
running the molecular docking tool on the public cloud to carry out binding studies.

2. The method of claim 1 further comprising the step of measuring a job runtime of the molecular docking tool on the public cluster.

3. The method of claim 1 further comprising the step of predicting the job length using a machine learning algorithm using historical data of various job run time.

4. The method of claim 1, wherein the policy engine further comprises a consequence-based decision making module to predict if it would be worthwhile to burst out a particular job or a set of jobs considering the impact on the in-premise cluster queue.

5. The method of claim 1, wherein the policy engine also considers security parameter, license parameter, workflow complexity and the consequence on the in-premise queue to identify the target proteins for bursting.

6. The method of claim 1, wherein the container tool is Docker.

7. The method of claim 1, wherein the molecular docking tool is Autodock.

8. The method of claim 1, wherein the job length depends upon the size of the compound used for docking and the parameters selected by a user.

9. A system for carrying out binding studies on a converged infrastructure, the system comprises:
an input module for providing target proteins and compounds to be screened as an input, wherein the compounds are going to be assessed for binding to the target proteins through the process of molecular docking and each of the target proteins has their respective job length, wherein the job length varies depending on a plurality of docking parameters;
a memory; and
a processor in communication with the memory, the processor further comprising:
a selection module for selecting a molecular docking tool for the purpose of docking of the target proteins;
a migration module for migrating the molecular docking tool from a high performance computing (HPC) cluster to a Hadoop cluster;
a container tool for containerizing the input and the migrated molecular docking tool;
a policy module for running a policy engine to identify the target proteins that qualify to move to a public cloud based on the job length;
a cloud bursting module for bursting the containerized input and the molecular docking tool to a public cluster based on the identification of the target proteins by the policy engine; and
a molecular docking module for running the molecular docking tool on the public cloud to carry out binding studies.

Dated this 20th day of December 2018
Description: TECHNICAL FIELD

[001] The embodiments herein generally relate to the field of molecular docking. More particularly, but not specifically, the invention provides a system and method for carrying out molecular docking on a converged infrastructure.

BACKGROUND

[002] Molecular docking is one of the most frequently used methods in structure-based drug design, used to shortlist hits owing to its ability to predict the binding conformation of small molecule ligands to the target binding site. To select hits, or prospective drugs, the pharmaceutical industry conducts the exercise of virtual screening, where molecular docking is performed using millions of compounds against targets of interest. The hits that are shortlisted are then taken ahead in the drug discovery pipeline for lead identification. Characterization of the binding behavior plays an important role in the rational design of drugs as well as in elucidating fundamental biochemical processes.
[003] Virtual screening is the first step in shortlisting prospective drugs, where selection is based on the binding affinity between the target and the small molecules or compounds. It is a compute intensive process and takes a lot of time, as millions of compounds are screened against each target of interest. Therefore, molecular docking applications such as Autodock are executed on classic HPC clusters using message passing interface (MPI), interconnects and parallel file systems (PFS) that are proprietary and complex in nature. Moreover, creating an environment similar to classic HPC clusters, which use bare metal instances, is challenging, as most public cloud providers offer virtual compute instances.
[004] In addition, the whole docking process is very compute intensive, requiring a lot of CPU processing power both in terms of high clock speed and in the number of CPUs needed to run the application in parallel. Traditional HPC clusters use proprietary components such as InfiniBand, which are very expensive and require a very specific skillset. Various other requirements also need to be considered, such as that containerization should be easy to adopt and affordable.

SUMMARY

[005] The following presents a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the embodiments. Its sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.
[006] In view of the foregoing, an embodiment herein provides a system for carrying out binding studies on a converged infrastructure. The system comprises an input module, a memory and a processor in communication with the memory. The input module provides the target proteins and compounds to be docked as an input, wherein each small molecule/compound is docked against the target proteins. Each molecular docking process has its respective job length depending upon the size of the compound used for docking and the parameters selected by the user. The processor further comprises a selection module, a migration module, a container tool, a policy module, a cloud bursting module and a molecular docking module. The selection module selects a molecular docking tool for the purpose of docking of the target proteins. The migration module migrates the molecular docking tool from a high performance computing (HPC) cluster to a Hadoop cluster. The container tool containerizes the input and the migrated molecular docking tool. The policy module runs a policy engine to identify the target proteins that qualify to move to a public cloud based on the job length. The cloud bursting module bursts the containerized input and the molecular docking tool to a public cluster based on the identification of the target proteins by the policy engine. The molecular docking module runs the molecular docking tool on the public cloud to carry out binding studies.
[007] In another aspect, the embodiment herein provides a method for carrying out binding studies on a converged infrastructure. Target proteins and small molecules/compounds to be screened are provided as an input; each compound is assessed for binding to the target through the process of molecular docking, wherein the job length varies depending on a plurality of docking parameters. Further, a molecular docking tool is selected for the purpose of docking of the target proteins. In the next step, the molecular docking tool is migrated from a high performance computing (HPC) cluster to a Hadoop cluster. Further, the input and the migrated molecular docking tool are containerized using a container tool. In the next step, a policy engine is run to identify the target proteins that qualify to move to a public cloud based on the job length. Further, the containerized input and the molecular docking tool are burst to a public cluster based on the identification of the target proteins by the policy engine. And finally, the molecular docking tool is run on the public cloud to carry out binding studies.
[008] It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.

BRIEF DESCRIPTION OF THE DRAWINGS

[009] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[010] Fig. 1 illustrates a block diagram of a system for carrying out binding studies on a converged infrastructure according to an embodiment of the present disclosure;
[011] Fig. 2 shows an architectural diagram of the system for carrying out binding studies on the converged infrastructure according to an embodiment of the disclosure;
[012] Fig. 3 shows a flowchart illustrating the use of policy engine according to an embodiment of the disclosure;
[013] Fig. 4A-4B is a flowchart illustrating the steps involved in carrying out binding studies on a converged infrastructure according to an embodiment of the present disclosure; and
[014] Fig. 5 shows graphical representation of comparison of job runtime according to an embodiment of the disclosure.

DETAILED DESCRIPTION

[015] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[016] Referring now to the drawings, and more particularly to Fig. 1 through Fig. 5, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[017] According to an embodiment of the disclosure, a system 100 for carrying out binding studies on a converged infrastructure is shown in the block diagram of Fig. 1. The converged infrastructure is used to run various applications/workloads on a shared infrastructure, for example an HPC application and an analytics workload on the same compute pool. The system 100 is configured to migrate life sciences applications in the molecular docking area from classic HPC clusters to Hadoop clusters. The migration is done by creating containers with the application and its environment to move from private to public cloud to address the peak infrastructure requirements.
[018] According to an embodiment of the disclosure, the system 100 is configured to operate in three phases as shown in the architectural flow diagram of Fig. 2. In the first phase, the life science application in the molecular docking area is migrated from a classic High Performance Computing (HPC) cluster to a Hadoop cluster, making it easily available, manageable and cost effective. In the second phase, containers are created with the application and its environment to migrate from private to public cloud to address the peak infrastructure requirements. And the third phase is automated cloud bursting. In an example, the cloud bursting can be performed by the icBURST tool, where the in-premise workload is moved to the public cloud based on a decision by a policy engine using machine learning techniques.
[019] According to an embodiment of the disclosure, the system 100 further comprises an input module 102, a memory 104 and a processor 106 as shown in the block diagram of Fig. 1. The processor 106 works in communication with the memory 104. The processor 106 further comprises a plurality of modules. The plurality of modules accesses the set of algorithms stored in the memory 104 to perform certain functions. The processor 106 further comprises a selection module 108, a migration module 110, a container tool 112, a policy module 114, a cloud bursting module 116 and a molecular docking module 118.
[020] According to an embodiment of the disclosure, the input module 102 is configured to provide target proteins and compounds to be screened as an input to the system 100. The compounds are assessed for binding to the target proteins through the process of molecular docking, and each of the target proteins has its respective job length, wherein the job length varies depending on a plurality of docking parameters. The job length depends upon the size of the compound used for docking and the parameters selected by a user. The input module 102 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like, and can facilitate multiple communications within a wide variety of network (N/W) and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite.
[021] According to an embodiment of the disclosure, the processor 106 comprises the selection module 108. The selection module 108 is configured to select a molecular docking tool/application for the purpose of docking of the target proteins. In an example, the Autodock tool has been used as the molecular docking tool. It should be appreciated that the use of any other tool for the purpose of docking is well within the scope of this disclosure.
[022] The molecular docking tool or application, Autodock, is embarrassingly parallel in nature and hence has minimal inter-node communication. A detailed profile of the application was studied to conclude that the Autodock application is data parallel in nature and hence suitable for Hadoop clusters. The Autodock application code is written in 'C', so it was decided to adopt Hadoop streaming and the HDFS-NFS configuration to avoid rewriting the Autodock code for the Hadoop environment.
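A Hadoop streaming wrapper of the kind described above can be sketched as a small mapper script: Hadoop feeds it one record per map input line, and each line is treated as one ligand to dock. This is a minimal sketch, not the actual implementation; the `autodock4` invocation, the `.dpf`/`.dlg` file layout and the `/hdfs` NFS mount point are all assumptions for illustration.

```python
#!/usr/bin/env python3
"""Minimal sketch of a Hadoop-streaming mapper wrapping Autodock.

Assumptions (hypothetical, for illustration): the autodock4 CLI, a
docking parameter file (.dpf) per ligand, and an HDFS-NFS gateway
mounted at /hdfs. One ligand per input line -> one docking run.
"""
import subprocess
import sys

HDFS_MOUNT = "/hdfs"  # hypothetical HDFS-NFS gateway mount point


def autodock_command(ligand_name, work_dir=HDFS_MOUNT):
    """Build the Autodock invocation for a single ligand (assumed layout)."""
    dpf = f"{work_dir}/dpf/{ligand_name}.dpf"   # docking parameter file
    dlg = f"{work_dir}/out/{ligand_name}.dlg"   # docking log (results)
    return ["autodock4", "-p", dpf, "-l", dlg]


def run_mapper(lines, runner=subprocess.run):
    """Process one ligand name per line, as Hadoop streaming feeds stdin."""
    for line in lines:
        ligand = line.strip()
        if not ligand:
            continue
        runner(autodock_command(ligand), check=True)
        # Emit a key/value pair so the map-only job records completion.
        print(f"{ligand}\tdone")


if __name__ == "__main__":
    run_mapper(sys.stdin)
```

Because the docking binary is invoked as a subprocess, the 'C' code of Autodock needs no changes, matching the rationale given above for choosing Hadoop streaming.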
[023] According to an embodiment of the disclosure, the processor 106 comprises the migration module 110. The migration module 110 is configured to migrate the molecular docking tool from a high performance computing (HPC) cluster to a Hadoop cluster. The use of the Hadoop environment makes it easier to adopt containerization and move the application along with its environment to the public cloud. To carry out a detailed comparison between an MPI based classic HPC cluster and a Hadoop cluster, two four-node clusters with 32 CPU cores in total were set up with similar configuration. Classic HPC clusters require InfiniBand as the interconnect and a parallel file system; in this case the Lustre file system was used. Job run-time was measured for various input data sets on both clusters to compare the run time between the Hadoop and classic HPC clusters, and it was found that the job run-times were almost similar.
[024] According to an embodiment of the disclosure, the processor 106 also comprises the container tool 112. The container tool 112 is configured to containerize the input target proteins and the migrated molecular docking tool. In an example, the Docker tool was used as the container tool 112. It should be appreciated that the use of any other container tool 112 is well within the scope of this disclosure. A literature study indicates that Docker as a containerization tool is widely accepted for Hadoop environments. A Docker container was created with master and slave server images and moved to another server irrespective of the underlying architecture and operating system version. Autodock runs with various input data sets were executed on these containers running in master/slave or data-node/name-node mode. The job run-time was measured and found satisfactory as compared to the bare-metal Hadoop environment. The Docker container made it possible to move the Autodock application and its application environment to the public cloud, Amazon Web Services (AWS). Job run-time was also measured on the AWS instance and found to be satisfactory.
[025] According to an embodiment of the disclosure, the processor 106 comprises the policy module 114. The policy module 114 is configured to run a policy engine to identify the target proteins that qualify to move to a public cloud based on the job length. The policy engine was designed to follow a predefined flow chart 200 to mark the jobs that qualify to move to the public cloud, as shown in the flowchart of Fig. 3. A key parameter of the policy engine is prediction of the job lengths of the jobs that are pending in the queue. The job length is predicted using machine learning techniques such as random forest, Lasso and KNN algorithms, using historical data of various job run times. 'Very small' and 'very large' jobs are determined by predefined threshold values. Very small jobs will not burst, as the job runtime would be smaller than the compute plus data transfer time to the cloud. Very large jobs will not burst, as they would be very costly from a compute and data usage point of view. The policy engine has captured data of various jobs starting from 10 Ligand files to 12800 Ligand files. Along with the job length, various cluster parameters like CPU, RAM and IO utilization were also captured.
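The prediction step above can be sketched with the simplest of the named techniques, a k-nearest-neighbour average over historical (input size, runtime) pairs; the random forest and Lasso variants would fit the same interface. The sample history below is illustrative only, not measured data from the disclosure.

```python
"""Minimal sketch of job-length prediction from historical run times,
using a k-nearest-neighbour average over (ligand_count, runtime) pairs."""

def predict_job_length(n_ligands, history, k=3):
    """history: list of (ligand_count, runtime_seconds) from past jobs.

    Returns the mean runtime of the k jobs whose input size is closest
    to the queried ligand count.
    """
    nearest = sorted(history, key=lambda rec: abs(rec[0] - n_ligands))[:k]
    return sum(runtime for _, runtime in nearest) / len(nearest)


# Illustrative (hypothetical) history spanning the 10..12800 ligand range.
history = [(10, 60), (100, 540), (400, 2100), (1600, 8400), (12800, 66000)]
```

The predicted length then feeds the 'very small'/'very large' threshold check that decides whether a pending job is worth bursting.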
[026] As shown in the flow chart 200, initially at step 202, job qualification is done. Various parameters are captured, such as input and output data size, whether it is High, Medium or Low. Further, security information (allowed only in-premise), license requirement and cost are also captured. Workflow complexity is determined, whether it is High, Medium or Low. In the next step 204, it is checked whether the job is cloud ready. If yes, then at step 206 the policy engine is executed. If no, then at step 208 jobs are executed on in-premise clusters. In the next step 210, it is checked whether bursting can be done or not. If yes, then at step 212 the job is burst on to the cloud. If no, then at step 208 jobs are executed on in-premise clusters.
[027] The policy engine identifies the target proteins that qualify to move to the public cloud based on their job length and various other parameters. For example, the job is waiting for n seconds, where n is a threshold value beyond which jobs will not remain in the pend state. Similarly, very small and very large jobs will not burst, as mentioned above. Urgent jobs will have priority to burst out. The policy engine is also configured to make consequence-based decisions. The bursting of a particular job is decided based on the impact on the in-premise queue. If the jobs in the in-premise cluster are about to complete, then jobs in the pend state will not burst until all the job slots are occupied in the in-premise queue. Based on the above parameters, the policy engine will predict/speculate job bursting and transfer data for that particular job.
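The decision logic of paragraphs [026]-[027] can be condensed into a single predicate: burst only if the job is cloud ready, the in-premise queue is saturated, the predicted length sits between the 'very small' and 'very large' thresholds, and the job has either pended past its threshold or is urgent. This is a sketch of the flow chart, not the icBURST implementation; the threshold constants and the job-dictionary keys are assumptions for illustration.

```python
"""Sketch of the policy-engine bursting decision (flow chart 200).
Thresholds and field names are illustrative assumptions."""

MIN_BURST_SECONDS = 600      # 'very small': transfer time would dominate
MAX_BURST_SECONDS = 86400    # 'very large': cloud compute/data cost dominates


def should_burst(job, queue_slots_free):
    """Return True if the job qualifies to burst to the public cloud."""
    if not job["cloud_ready"]:            # security/licence: in-premise only
        return False
    if queue_slots_free > 0:              # consequence check: local capacity left
        return False
    length = job["predicted_length"]
    if length < MIN_BURST_SECONDS or length > MAX_BURST_SECONDS:
        return False                      # very small / very large jobs stay
    # Pending beyond threshold n, or marked urgent, triggers the burst.
    return job["pending_seconds"] > job["pend_threshold"] or job["urgent"]
```

A job such as `{"cloud_ready": True, "predicted_length": 3600, "pending_seconds": 700, "pend_threshold": 600, "urgent": False}` would burst once no in-premise slots remain free.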
[028] According to an embodiment of the disclosure, the processor 106 further comprises the cloud bursting module 116. The cloud bursting module 116 is configured to burst the containerized input and the molecular docking tool to a public cluster based on the identification of the target proteins by the policy engine. In the present example the cloud bursting is performed by the icBURST tool.
[029] According to an embodiment of the disclosure, the processor 106 also comprises the molecular docking module 118. The molecular docking module 118 is configured to run the molecular docking tool on the public cloud to carry out binding studies.
[030] In operation, flowchart 300 illustrates the method for carrying out binding studies on the converged infrastructure. Initially at step 302, the target proteins and compounds to be screened are provided as an input. The compounds are assessed for binding to the target proteins through the process of molecular docking, and each of the target proteins has its respective job length, wherein the job length varies depending on a plurality of docking parameters. The job length is predicted using various machine learning techniques such as random forest, Lasso and KNN algorithms, using historical data of various job run times. Very small jobs will not burst, as the job runtime would be smaller than the compute plus data transfer time to the cloud. Very large jobs will not burst, as they would be very costly from a compute and data usage point of view. At step 304, the molecular docking tool is selected for the purpose of docking of the target proteins. In an example, Autodock is selected. In the next step 306, the molecular docking tool is migrated from the high performance computing (HPC) cluster to the Hadoop cluster. The use of the Hadoop cluster makes the application easily available, easily manageable and cost effective.
[031] In the next step 308, the input and the migrated molecular docking tool are containerized using the container tool. Docker has been used as the container tool in an example. In the next step 310, the policy engine is run to identify the target proteins that qualify to move to a public cloud based on the job length. At step 312, the containerized input and the molecular docking tool are burst to the public cluster based on the identification of the target proteins by the policy engine. And finally at step 314, the molecular docking tool is run on the public cloud to carry out binding studies.
[032] According to an embodiment of the disclosure, the system 100 can also be explained with the help of experimental procedures and results. In the first phase, porting of the molecular docking application, i.e. Autodock, onto Hadoop was done by the following steps: Firstly, the input is provided through Hadoop streaming. Secondly, the Hadoop distributed file system (HDFS) - Network File System (NFS) gateway configuration is provided. This allowed the data to be stored in the HDFS file system and exported to all other nodes via the HDFS-NFS gateway. In the next step, hadoop.proxyuser.hadoop.groups and hadoop.proxyuser.hadoop.hosts are set to '' in core-site.xml of the master server, mainly to allow the master user to proxy all other users. Further, automated scripts are run to 'put' the Autodock application data, i.e. Ligand and Receptor files, into the HDFS file system. In the next step, map-only streaming jobs are run. Each Ligand has one map associated with it, running on one CPU core in a Hadoop container. Finally, the output is again stored at the HDFS mount point via the HDFS-NFS gateway.
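The map-only streaming run in the steps above can be sketched as the command an automated script would assemble. The streaming jar path and mapper script name are hypothetical; setting `mapreduce.job.reduces=0` is the standard way to make a Hadoop streaming job map-only, so that each ligand record is handled by exactly one map.

```python
"""Sketch of launching the map-only Hadoop streaming job described
above. Jar path, HDFS paths and mapper name are illustrative."""

def streaming_job_command(input_dir, output_dir,
                          jar="/opt/hadoop/share/hadoop/tools/lib/hadoop-streaming.jar",
                          mapper="autodock_mapper.py"):
    """Assemble the `hadoop jar` invocation for a map-only streaming job."""
    return [
        "hadoop", "jar", jar,
        "-D", "mapreduce.job.reduces=0",   # map-only: no reduce phase
        "-input", input_dir,               # ligand list previously 'put' into HDFS
        "-output", output_dir,             # results, readable via the HDFS-NFS mount
        "-mapper", mapper,
        "-file", mapper,                   # ship the mapper script to each node
    ]
```

A script would pass this list to `subprocess.run` after the 'put' step has staged the Ligand and Receptor files in HDFS.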
[033] In the second phase, the containerization is done using the Docker tool as follows. Initially, a tar.gz file is created on the source server with the requisite directory structure and environment. The tar.gz file is then transferred to the destination server to run a Dockerized CentOS image containing Hadoop and the application. The destination server should have the Docker rpm installed, with the Docker daemon running. In the next step, the tar.gz is imported to build a Docker image, with modifications if any. Finally, the container command is executed to launch the container.
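The four containerization steps above map onto a short command sequence, sketched here as the list an automation script might execute. Archive, image and host names are hypothetical; `docker import` is the standard Docker command for building an image from a tarball, matching the tar.gz-based flow described.

```python
"""Sketch of the tar.gz -> Docker image -> running container flow.
All names (archive, image, destination host, paths) are illustrative."""

def containerization_commands(archive="hadoop-autodock.tar.gz",
                              image="hadoop-autodock:latest",
                              dest="hadoop-master"):
    """Return the shell commands for the four containerization steps."""
    return [
        # 1. Package the directory structure and environment on the source server.
        ["tar", "-czf", archive, "/opt/hadoop", "/opt/autodock"],
        # 2. Transfer the archive to the destination server (Docker daemon running there).
        ["scp", archive, f"{dest}:/tmp/{archive}"],
        # 3. Import the archive as a Docker image (apply modifications, if any).
        ["docker", "import", f"/tmp/{archive}", image],
        # 4. Launch the container, e.g. in master / name-node mode.
        ["docker", "run", "-d", "--name", "hadoop-master", image],
    ]
```

Because the image carries the whole application environment, the same sequence works whether the destination server is on-premise or a public cloud instance, which is what enables the bursting phase that follows.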
[034] In the third phase, automated cloud bursting using icBURST is performed. Initially, the training data is prepared for various jobs having different sizes of input data sets, and the job length is measured. Based on the training data, the intermediate job lengths, as well as job lengths beyond the maximum measured job length, are predicted. In the next step, a flow chart is defined to check the pending jobs in the queue, and based on the decision of the policy engine the job is burst out to the cloud. In the next step, the qualified jobs are packaged in a Docker container and transferred to the public cloud. And finally, the policy engine also considers the consequences on the local queue due to bursting and then takes the decision to burst out.
[035] Table I shows the test results of scalability runs comparing job runtime on Hadoop vs. MPI. The table shows scalability numbers for numbers of Ligand files ranging from 10 to 12800 on the Hadoop and MPI clusters. The runtime for the largest job of 12800 Ligand files is 5.8% more on Hadoop than on the MPI cluster.

TABLE I – Scalability runs comparing job runtime
[036] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[037] The embodiments of the present disclosure herein solve the problems of managing workload requirements and cost while performing molecular docking. The disclosure provides a method and system for carrying out binding studies on a converged infrastructure.
[038] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[039] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[040] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[041] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[042] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

Documents

Application Documents

# Name Date
1 201821048324-STATEMENT OF UNDERTAKING (FORM 3) [20-12-2018(online)].pdf 2018-12-20
2 201821048324-REQUEST FOR EXAMINATION (FORM-18) [20-12-2018(online)].pdf 2018-12-20
3 201821048324-FORM 18 [20-12-2018(online)].pdf 2018-12-20
4 201821048324-FORM 1 [20-12-2018(online)].pdf 2018-12-20
5 201821048324-FIGURE OF ABSTRACT [20-12-2018(online)].jpg 2018-12-20
6 201821048324-DRAWINGS [20-12-2018(online)].pdf 2018-12-20
7 201821048324-DECLARATION OF INVENTORSHIP (FORM 5) [20-12-2018(online)].pdf 2018-12-20
8 201821048324-COMPLETE SPECIFICATION [20-12-2018(online)].pdf 2018-12-20
9 201821048324-Proof of Right (MANDATORY) [21-12-2018(online)].pdf 2018-12-21
10 201821048324-FORM-26 [11-02-2019(online)].pdf 2019-02-11
11 Abstract1.jpg 2019-03-22
12 201821048324- ORIGINAL UR 6(1A) FORM 1-241218.pdf 2019-04-10
13 201821048324- ORIGINAL UR 6(1A) FORM 26-130219.pdf 2019-12-02
14 201821048324-FER.pdf 2020-07-28
15 201821048324-OTHERS [18-01-2021(online)].pdf 2021-01-18
16 201821048324-FER_SER_REPLY [18-01-2021(online)].pdf 2021-01-18
17 201821048324-DRAWING [18-01-2021(online)].pdf 2021-01-18
18 201821048324-COMPLETE SPECIFICATION [18-01-2021(online)].pdf 2021-01-18
19 201821048324-CLAIMS [18-01-2021(online)].pdf 2021-01-18
20 201821048324-US(14)-HearingNotice-(HearingDate-06-03-2024).pdf 2024-02-09
21 201821048324-FORM-26 [05-03-2024(online)].pdf 2024-03-05
22 201821048324-Correspondence to notify the Controller [05-03-2024(online)].pdf 2024-03-05
23 201821048324-Written submissions and relevant documents [18-03-2024(online)].pdf 2024-03-18
24 201821048324-PatentCertificate09-05-2024.pdf 2024-05-09
25 201821048324-IntimationOfGrant09-05-2024.pdf 2024-05-09

Search Strategy

1 SSE_16-07-2020.pdf

ERegister / Renewals

3rd: 17 May 2024

From 20/12/2020 - To 20/12/2021

4th: 17 May 2024

From 20/12/2021 - To 20/12/2022

5th: 17 May 2024

From 20/12/2022 - To 20/12/2023

6th: 17 May 2024

From 20/12/2023 - To 20/12/2024

7th: 19 Nov 2024

From 20/12/2024 - To 20/12/2025

8th: 20 Nov 2025

From 20/12/2025 - To 20/12/2026