
A Distributed In Memory Cache Based Computing Network

Abstract: A distributed in-memory cache based computing network. This invention relates to providing an effective means of handling big data, and more particularly to handling big data by providing distributed storage of most important data in the Random Access Memory (RAM), whereas the actual data is persisted in a distributed, scalable big data store. FIG. 3


Patent Information

Application #: 1837-CHE-2014
Filing Date: 07 April 2014
Publication Number: 01/2016
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

TOSHIBA SOFTWARE INDIA (PVT) LTD
#3A, "ESSAE VAISHNAVI SOLITAIRE", 3RD BLOCK, KORAMANGALA, BANGALORE - 560 034

Inventors

1. RAMU MALUR SRINIVASAREDDY
TOSHIBA SOFTWARE INDIA (PVT) LTD., #3A, "ESSAE VAISHNAVI SOLITAIRE", 3RD BLOCK, KORAMANGALA, BANGALORE - 560 034

Specification

FIELD OF INVENTION
[001] This invention relates to providing an effective means of handling big data, and more particularly to handling big data by providing distributed storage of most important data in the Random Access Memory (RAM), whereas the actual data is persisted in a distributed, scalable, big data store.
BACKGROUND OF INVENTION
[002] Currently, Hadoop frameworks, which provide a distributed, scalable big data store, are used for a variety of applications, hosting very large tables of data atop clusters of commodity hardware. Such databases are NoSQL databases, more specifically distributed key-value data stores. NoSQL databases may be used when random, real-time read/write access to big data is required.
[003] Further, distributed, versioned, column-oriented stores such as Apache HBase and Google BigTable are available, which use distributed data storage models.
[004] Big data products like Hadoop, Apache HBase and so on offer cost-effective storage for huge amounts of data (usually in the range of petabytes), but they lack capabilities to fully utilize the capacity of the RAM (Random Access Memory) of the network/cluster with which the data is associated. These big data products also do not offer in-memory processing, as they were not designed for such use cases and are primarily meant to be deployed on commodity hardware, where the RAM may be comparatively low.
[005] For many applications, the most frequently used data is usually the latest data (as an example) generated from their business processes. Having this data in RAM helps them analyze the data in a quick and efficient manner, with very low latencies.

[006] Big data applications are primarily designed for random read/write access to big data with predictable latencies, wherein the latencies may be in the range of a few milliseconds. Typically, many applications attempt to speed up processing by caching data on a local machine, e.g., in connection with the machine's heap. This may result in the cache being made as large as possible.
[007] Garbage collection involves determining which objects can no longer be referenced by an application, and then reclaiming the memory used by "dead" objects (the garbage). But complexities arise in determining when, for how long, and how often, garbage collection activities are to take place, and this work directly impacts the performance and determinism of running applications. So, applications that execute on garbage-collected runtimes face an increasing challenge to handle the ever-increasing amounts of data and leverage the fast-growing amount of RAM on modern computer systems.
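As a rough illustration of the pressure described above, consider the following minimal Java sketch (hypothetical, not part of the original disclosure): it fills an on-heap map while reporting cumulative garbage collection time through the standard java.lang.management API. Run with a heap of a few gigabytes (e.g., -Xmx2g), the reported GC time grows as the on-heap cache grows.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: grow an on-heap "cache" and observe how much
// cumulative time the JVM's collectors report as the map fills up.
public class GcPressureDemo {
    public static void main(String[] args) {
        Map<Integer, byte[]> cache = new HashMap<>();
        for (int i = 0; i < 1_000_000; i++) {
            cache.put(i, new byte[1024]); // roughly 1 GB of cached payload in total
            if (i % 100_000 == 0) {
                long gcMillis = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcMillis += gc.getCollectionTime(); // cumulative GC time in ms
                }
                System.out.println("entries=" + i + " cumulative GC time=" + gcMillis + " ms");
            }
        }
    }
}
```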
[008] A consequence of increasing the size of the cache for garbage-collected runtimes is that, with the large heaps needed for large caches, Java-based environments slow down at an exponential rate, with much, if not all, of the slowdown being directly attributable to Java's garbage collection. A heap size of 2-4 gigabytes (GB) oftentimes is manageable, and some further amount can be considered usable if specialized modifications are made. But custom modifications may be time consuming and technically challenging.
[009] Typically, there is a 6-8 GB limit to Java heaps, although slowdowns frequently occur well before this maximum is ever reached. Slowdowns can halt all or substantially all processes that are executing. For large heaps, it is not uncommon to observe a 10 second delay in which nothing happens, although minute-long delays are not unheard of. These sorts of delays can be particularly problematic for web services, mission critical applications, and/or the like.
[0010] Challenges result from the increasing garbage collection pauses or delays that occur as runtime heaps become larger and larger. These delays may be unpredictable in length and in occurrence. Thus, as the data/memory explosion is occurring, the amount of the heap a garbage-collected runtime process can effectively use has stayed largely unchanged. In other words, although the amount of space available is growing, it oftentimes is challenging to use it in an efficient and cost-effective way.
[0011] These problems manifest themselves in several ways and can be caused in several common scenarios. A first problem relates to applications running too slowly. For example, an application might not be able to keep up with the users (e.g., with 10s of GBs of data in a database, the application may be overloaded and/or too slow to service the needs of users), which may be caused by the complicated nature of queries, the volume of those queries, and/or the like. Caching may help by moving data "closer" to the application, but too many Java garbage collection pauses may be incurred if the cache is grown too large (e.g., to approximate the 16 GB of RAM in a hypothetical system).
[0012] Another common problem relates to unpredictable latencies that can affect the application. An application might be sufficiently fast on average, but the many pauses that deviate from the mean may be unacceptable to its users. Service Level Agreements (SLAs) may not be met because of the size of the heap, combined with Java garbage collection pauses.
[0013] Consider the example of Apache HBase, which has an inbuilt cache, called the block cache, which is used to store the most frequently used data. The block cache uses the JVM (Java Virtual Machine) to store the blocks and is on heap. The size of this cache is usually configured as a percentage of the Region Server's heap, wherein the Region Server is a Java process and is usually configured with a few GBs of heap space. This often limits the total cache size to around 30-40% of the total heap space in the cluster. This results in the cache size being a few GBs (in an example, consider that the maximum heap space is 8 GB; then 2 GB of the total heap space (8 GB) is reserved for the cache). This limit is often too small for big data applications that need real-time/near real-time processing. Typically, when configured with higher heap sizes, wherein the cache is of the order of 8 GB, the JVM will suffer from garbage collection pauses when the heap is overloaded. This may lead to DoS (Denial of Service), which means the JVM is so busy cleaning up the cache that it has no time for serving user requests. Also, the block cache may suffer from fragmentation of the heap, when not all records in a block are required.
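The arithmetic of this example can be made explicit. In HBase, the block cache is sized through the hfile.block.cache.size setting, expressed as a fraction of the Region Server heap; the Java sketch below is an illustration only, with the 0.25 fraction assumed so as to match the 2 GB / 8 GB example above.

```java
// Worked example of the block cache sizing described above: an 8 GB Region
// Server heap with a block cache fraction of 0.25 yields only ~2 GB of cache.
public class BlockCacheSizing {
    public static void main(String[] args) {
        long heapBytes = 8L * 1024 * 1024 * 1024; // 8 GB Region Server heap
        double blockCacheFraction = 0.25;          // assumed hfile.block.cache.size value
        long cacheBytes = (long) (heapBytes * blockCacheFraction);
        System.out.printf("Block cache capacity: %.1f GB%n",
                cacheBytes / (1024.0 * 1024 * 1024));
    }
}
```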
[0014] Solutions which offer an in-memory (off-heap) cache on top of big data systems are available, wherein such solutions recommend using an SSD (Solid State Drive) as the storage means. However, with SSDs being very expensive, obtaining large amounts of storage comes at a very high cost (especially when the requirement is on the order of petabytes).
OBJECT OF INVENTION
[0015] The principal object of this invention is to propose a method and system for handling big data by providing distributed storage of most important data in the Random Access Memory (RAM), whereas the actual data is persisted in a distributed, scalable big data store.
STATEMENT OF INVENTION
[0016] Accordingly, the invention provides a method for handling big data in a computing network, the computing network comprising of at least one server; wherein the server further comprises of a big data database, an in-memory database and a cache manager, the method comprising of intercepting requests from a client in the computing network to at least one of the big data database and the in-memory database by the cache manager; and performing at least one operation by the cache manager in response to the request.
[0017] Also provided herein is a server present in a computing network configured for handling big data, the server further comprising of a big data database, an in-memory database and a cache manager, the cache manager further configured for intercepting requests from a client in the computing network to at least one of the big data database and the in-memory database; and performing at least one operation in response to the request.
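To make the claimed arrangement concrete, the following minimal Java sketch (hypothetical names throughout; not part of the original disclosure) models the cache manager as the single interception point placed in front of two key-value stores:

```java
// Hypothetical request carrying the client identity, the operation and the data.
record Request(String clientId, String operation, byte[] key, byte[] value) {}

// Stand-in for both the in-memory database and the big data database.
interface KeyValueStore {
    byte[] get(byte[] key);
    void upsert(byte[] key, byte[] value);
    void delete(byte[] key);
}

// The cache manager intercepts every client request and decides which
// store (or both) serves it.
interface CacheManager {
    byte[] handle(Request request);
}
```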
[0018] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0019] This invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0020] FIG. 1 depicts a computing network comprising of at least one client and at least one server interconnected using a network, according to embodiments as disclosed herein;

[0021] FIG. 2 depicts a server, according to embodiments as disclosed herein;
[0022] FIG. 3 is a flowchart illustrating a process of a cache manager handling a request from a client, according to embodiments as disclosed herein;
[0023] FIG. 4 is a flowchart illustrating a process of a cache manager enabling a client to make an Upsert operation in a write through cache, according to embodiments as disclosed herein;
[0024] FIG. 5 is a flowchart illustrating a process of a cache manager enabling a client to make an Upsert operation in a read through cache, according to embodiments as disclosed herein;
[0025] FIG. 6 is a flowchart illustrating a process of a cache manager enabling a client to make a read operation, according to embodiments as disclosed herein; and
[0026] FIG. 7 illustrates a computing environment implementing the method for handling big data by providing distributed storage of most important data in the Random Access Memory (RAM), whereas the actual data is persisted in a distributed, scalable, big data store, according to embodiments as disclosed herein.

DETAILED DESCRIPTION OF INVENTION
[0027] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0028] The embodiments herein achieve a method and system for handling big data by providing distributed storage of most important data in the Random Access Memory (RAM) (in-memory database), whereas the actual data is persisted in a distributed, scalable, big data store. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0029] The terms 'database', 'data base', 'data store' and 'datastore' have been used interchangeably herein and the aforementioned terms have been used to denote a database, according to embodiments as disclosed herein.
[0030] The terms 'in-memory database' and 'cache' have been used interchangeably herein and these terms may be construed to denote an 'in-memory database', according to embodiments as disclosed herein.
[0031] FIG. 1 depicts a computing network comprising of at least one client and at least one server interconnected using a network, according to embodiments as disclosed herein. The computing network, as depicted, comprises of at least one server 101 and at least one client 102. The client 102 may be an electronic device being used by an authorized user, an automated electronic device or any other electronic device authorized to access the server 101. The electronic device may be a computer, a laptop, a mobile computing device, a Personal Digital Assistant (PDA), a smart phone or any other device that may enable an authorized user to access the server 101. In an embodiment herein, there may be more than one server 101 present and connected to the computing network. The server 101 may comprise of big data and the big data may be distributed over at least one server 101 present in the computing network. The server 101 may be a part of a distributed server cluster. The server 101 and the client 102 may be connected to each other through a network 103. The network 103 may be at least one of a Local Area Network (LAN), a Wide Area Network (WAN), a Virtual Private Network (VPN) or any other network that may enable a client to connect to the server 101. In an embodiment herein, the network 103 may be the internet.
[0032] The client 102 may make a request for performing an operation on a set of data present in at least one server 101, through the network 103. Examples of the operation may be at least one of adding a new set of data, removing an existing set of data, editing an existing set of data, searching an existing set of data, scanning an existing set of data and so on.
[0033] FIG. 2 depicts a server, according to embodiments as disclosed herein. The server 101, as depicted, comprises of a cache manager 201, an in-memory database 202 and an HBase database 203. The cache manager 201 intercepts all requests received by the server 101 and interfaces with at least one of the in-memory database 202 and the HBase database 203 to serve the received requests. The in-memory database 202 is a cache database present within the server 101. The in-memory database 202 may be a distributed, scalable NoSQL database which manages sets of data (called Rows), each consisting of a Key and a Value. The in-memory database 202 performs in-memory data management, allowing high-speed processing. In an embodiment herein, the in-memory database 202 may reside in Random Access Memory (RAM). The HBase database 203 is used herein as an example and may be replaced by any other scalable, distributed big data database.
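A minimal structural sketch of this composition in Java, with hypothetical names (a ConcurrentHashMap stands in for the in-memory database 202, and a plain interface stands in for the HBase database 203):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the HBase database 203 (or any scalable, distributed big data store).
interface BigDataStore {
    byte[] get(String key);
    void put(String key, byte[] value);
}

// Sketch of the server of FIG. 2: the cache manager owns a RAM-resident
// key/value map (the in-memory database 202) and a backing big data store.
class Server {
    final Map<String, byte[]> inMemoryDb = new ConcurrentHashMap<>(); // in-memory database 202
    final BigDataStore bigDataDb;                                     // HBase database 203

    Server(BigDataStore bigDataDb) {
        this.bigDataDb = bigDataDb;
    }
}
```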
[0034] The cache manager 201 receives requests from the client 102. The cache manager 201 may check if the client 102 is authorized to access the set of data and has the requisite set of permissions necessary to perform the operation. If the client 102 does not have the requisite permissions, the cache manager 201 informs the client 102 of the failure. The cache manager 201 may also inform the client 102 of the reason for the failure (absence of the requisite permissions necessary to perform the requested operation).
[0035] For example, the requesting client 102 may be requesting the deletion of a set of data. The cache manager 201, on receiving the request for deleting the data set, checks if the requesting client 102 has the permission to access the set of data and delete the set of data. If the requesting client 102 has the permission to access the set of data and delete the set of data, the cache manager 201 proceeds further. If the requesting client 102 does not have the permission to delete the set of data, the cache manager 201 informs the requesting client 102 of the failure. The cache manager 201 may also inform the requesting client 102 that the reason for the failure to delete the set of data was its lack of permission to do so.
[0036] The cache manager 201, on checking that the requesting client 102 has the requisite permission, proceeds further with the operation. For example, if the operation is a read operation, the cache manager 201 checks if the set of data to be read is present in the in-memory database 202. If the set of data is present in the in-memory database 202, the cache manager 201 fetches the set of data from the in-memory database 202 and makes it available to the requesting client 102. If the set of data is not available in the in-memory database 202, the cache manager 201 fetches the data from the HBase database 203 and makes it available to the requesting client 102. The cache manager 201 may update the in-memory database 202 with the data fetched from the HBase database 203.
[0037] The cache manager 201 comprises of a data structure, wherein the data structure comprises of information such as location where a set of data is present (in the in-memory database 202 or in the HBase database 203) and so on. The data structure may comprise of information related to operations carried out by the cache manager 201.
[0038] The cache manager 201 may also provide an interface for an administrator to configure the cache manager 201. The administrator may also use the interface to configure the server 101.
[0039] The cache manager 201 may also enable the administrator to specify the size of the cache. The cache manager 201 may enable the administrator to specify the size of the cache in terms of 'bytes/number of records'.
[0040] The cache manager 201 may also enable the administrator to specify read/write through sizes. The cache manager 201 may also enable the administrator to specify read/write through sizes in terms of a percentage.
[0041] The cache manager 201 may also enable the administrator to define cache policies. The cache manager 201 may also enable the administrator to define cache policies per table and per column family. The cache manager 201 may also enable the administrator to define cache policies for read through cache and/or write through cache.
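The administrator-facing settings in paragraphs [0039] through [0041] might be gathered into a configuration object along the following lines (a Java sketch with hypothetical names and illustrative defaults; the disclosure does not prescribe a concrete format):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical configuration for the cache manager: cache capacity in bytes
// or records, read/write through sizes as percentages, and cache policies
// per table / column family.
class CacheConfig {
    long maxCacheBytes = 4L * 1024 * 1024 * 1024; // cache size in bytes...
    long maxCacheRecords = 10_000_000;            // ...or as a number of records
    int readThroughPercent = 70;  // share of capacity reserved for read-through entries
    int writeThroughPercent = 30; // share of capacity reserved for write-through entries

    enum Policy { READ_THROUGH, WRITE_THROUGH }

    // key: "table:columnFamily" -> cache policy for that table / column family
    final Map<String, Policy> policies = new HashMap<>();
}
```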

[0042] The cache manager 201 may also enable the administrator to specify compression of data present in the HBase database 203 and/or the in-memory database 202. The cache manager 201 may also enable the administrator to specify compression of data present in the HBase database 203 and/or the in-memory database 202 on at least one condition being satisfied, wherein the condition may be at least one of a time based trigger (after a pre-configured interval or at a pre-configured time), state of the empty space/occupied space in the databases 202, 203 (a pre-defined amount (which may be in terms of a percentage) of the databases 202, 203 is occupied) and so on.
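The compression conditions above (a time-based trigger, or occupancy crossing a threshold) could be expressed as a simple predicate; the following Java sketch uses hypothetical names and assumes occupancy is tracked in bytes:

```java
// Sketch of the compression conditions of [0042]: compress either after a
// configured interval has elapsed or once occupancy crosses a threshold.
class CompressionTrigger {
    final long intervalMillis;       // time-based trigger
    final double occupancyThreshold; // e.g. 0.8 = trigger at 80% full
    long lastRunMillis = System.currentTimeMillis();

    CompressionTrigger(long intervalMillis, double occupancyThreshold) {
        this.intervalMillis = intervalMillis;
        this.occupancyThreshold = occupancyThreshold;
    }

    boolean shouldCompress(long usedBytes, long capacityBytes) {
        boolean timeDue = System.currentTimeMillis() - lastRunMillis >= intervalMillis;
        boolean spaceDue = (double) usedBytes / capacityBytes >= occupancyThreshold;
        return timeDue || spaceDue;
    }
}
```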
[0043] The cache manager 201 may also enable the administrator to define eviction policies for the in-memory database 202. This ensures that the in-memory database 202 is not overloaded with details that are not required to be maintained. The cache manager 201 may also enable the administrator to define eviction policies at each table and column family level of the HBase database 203. The cache manager 201 may also enable the administrator to define at least one eviction policy such as LRU (Least Recently Used), LFU (Least Frequently Used), FIFO (First In First Out) or time based (data older than a pre-defined amount of time), which may be used. A Java sketch of one such policy follows this paragraph. The cache manager 201 may also enable the administrator to define more than one eviction policy, which may be used based on at least one predefined condition. The cache manager 201 may also enable the administrator to specify at least one trigger for triggering the eviction policies. The trigger may be at least one of a time based trigger (after a pre-configured interval or at a pre-configured time), the state of the empty space/occupied space in the in-memory database 202 (a pre-defined amount (which may be in terms of a percentage) of the in-memory database 202 is occupied), a forceful eviction (by directly invoking the evict operation by the administrator, either manually through execution of a program or automatically through an event generated from an authorized client 102) and so on. The cache manager 201 may update the data structure based on the eviction.
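Of the policies listed, LRU has a particularly compact Java realization via LinkedHashMap's access-order mode; the sketch below (hypothetical, bounded by record count only) evicts the least recently used entry once a configured size is exceeded. LFU, FIFO and time-based variants would each need their own bookkeeping.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache for the in-memory database: LinkedHashMap in
// access-order mode keeps the least recently used entry first, and
// removeEldestEntry evicts it once the record limit is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxRecords;

    LruCache(int maxRecords) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxRecords = maxRecords;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxRecords; // evict the least recently used entry
    }
}
```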
[0044] FIG. 3 is a flowchart illustrating a process of a cache manager handling a request from a client, according to embodiments as disclosed herein. The client 102 makes (301) a request for performing an operation on a set of data present in at least one server 101, through the network 103. The cache manager 201 receives (302) the request from the client 102. The cache manager 201 checks (303) if the client 102 is authorized to access the set of data and has the requisite set of permissions necessary to perform the operation. If the client 102 does not have the requisite permissions, the cache manager 201 informs (304) the client 102 of the failure. The cache manager 201 may also inform the client 102 of the reason for the failure (absence of the requisite permissions necessary to perform the requested operation). The cache manager 201, on checking that the requesting client 102 has the requisite permission, proceeds (305) further with the operation. The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
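The authorize-then-dispatch skeleton of FIG. 3 could look as follows in Java (hypothetical names; the step numbers in the comments refer to the flowchart):

```java
// Sketch of the FIG. 3 flow: authorize first, report a permission failure
// with its reason, otherwise dispatch the requested operation.
class RequestHandler {
    interface Authorizer {
        boolean isAuthorized(String clientId, String operation, String key);
    }

    private final Authorizer authorizer;

    RequestHandler(Authorizer authorizer) {
        this.authorizer = authorizer;
    }

    String handle(String clientId, String operation, String key) {
        if (!authorizer.isAuthorized(clientId, operation, key)) {           // step 303
            return "FAILED: client lacks permission for " + operation;      // step 304
        }
        return proceed(operation, key);                                     // step 305
    }

    private String proceed(String operation, String key) {
        return "OK"; // the actual operation is carried out here
    }
}
```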
[0045] FIG. 4 is a flowchart illustrating a process of a cache manager enabling a client to make an Upsert operation in a write through cache, according to embodiments as disclosed herein. The client 102 makes (401) an Upsert command, Upsert (K,V), to a server 101, through the network 103. 'Upsert' means an update or an insert operation, wherein data present in the server 101 may be updated or data may be inserted into the server 101. The cache manager 201 receives (402) the Upsert command from the client 102. The cache manager 201 checks (403) if the client 102 is authorized to perform the Upsert operation. If the client 102 is not authorized to perform the Upsert operation, the cache manager 201 informs (404) the client 102 of the failure. The cache manager 201, on checking that the requesting client is authorized to perform the Upsert operation, Upserts (405) (K,V) in the in-memory database 202 and updates (406) the data structure (indicating that (K,V) was Upserted into the in-memory database 202). The cache manager 201 further Upserts (407) (K,V) in the HBase database 203. The various actions in method 400 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
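In Java, steps 405 through 407 reduce to three writes (a sketch with hypothetical names; plain maps stand in for both databases and for the location data structure):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the FIG. 4 write-through upsert: (K,V) goes into the in-memory
// database, the location data structure is updated, and (K,V) is then
// persisted in the backing big data store.
class WriteThroughCache {
    final Map<String, byte[]> inMemoryDb = new ConcurrentHashMap<>();    // 202
    final Map<String, byte[]> hbaseDb = new ConcurrentHashMap<>();       // stand-in for 203
    final Map<String, Boolean> locationIndex = new ConcurrentHashMap<>(); // the "data structure"

    void upsert(String key, byte[] value) {
        inMemoryDb.put(key, value);   // step 405: upsert into the cache
        locationIndex.put(key, true); // step 406: record that K is cached
        hbaseDb.put(key, value);      // step 407: persist in the big data store
    }
}
```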
[0046] FIG. 5 is a flowchart illustrating a process of a cache manager enabling a client to make an Upsert operation in a read through cache, according to embodiments as disclosed herein. The client 102 makes (501) an Upsert command, Upsert (K,V), to a server 101, through the network 103. The cache manager 201 receives (502) the Upsert command from the client 102. The cache manager 201 checks (503) if the client 102 is authorized to perform the Upsert operation. If the client 102 is not authorized to perform the Upsert operation, the cache manager 201 informs (504) the client 102 of the failure. The cache manager 201, on checking that the requesting client is authorized to perform the Upsert operation, Upserts (505) (K,V) in the HBase database 203. The various actions in method 500 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 5 may be omitted.
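The read-through variant differs from FIG. 4 only in skipping the cache on writes; the cache is instead populated later by reads (FIG. 6). A corresponding Java sketch (hypothetical names):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the FIG. 5 read-through upsert: an authorized upsert goes
// straight to the backing store and the cache is left untouched.
class ReadThroughUpsert {
    final Map<String, byte[]> hbaseDb = new ConcurrentHashMap<>(); // stand-in for 203

    void upsert(String key, byte[] value) {
        hbaseDb.put(key, value); // step 505: persist only in the big data store
    }
}
```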
[0047] FIG. 6 is a flowchart illustrating a process of a cache manager enabling a client to make a read operation, according to embodiments as disclosed herein. The client 102 makes (601) a read command, read (K,V), to a server 101, through the network 103. The cache manager 201 receives (602) the read command from the client 102. The cache manager 201 checks (603) if the client 102 is authorized to perform the read operation. If the client 102 is not authorized to perform the read operation, the cache manager 201 informs (604) the client 102 of the failure. The cache manager 201, on checking that the requesting client is authorized to perform the read operation, checks (605) if the data structure comprises information that (K,V) is present in the in-memory database 202. The information may be present as a result of a previous Upsert operation or a previous read operation. If the data structure comprises information that (K,V) is present in the in-memory database 202, the cache manager 201 fetches (606) (K,V) from the in-memory database 202 and returns (607) (K,V) to the client 102. If the data structure comprises information that (K,V) is not present in the in-memory database 202, the cache manager 201 fetches (608) (K,V) from the HBase database 203, stores (609) (K,V) in the in-memory database 202, updates (610) the data structure (indicating that (K,V) is present in the in-memory database 202), and returns (611) (K,V) to the client 102. The various actions in method 600 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 6 may be omitted.
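The full read path of FIG. 6 then amounts to a lookup with fallback and cache-fill (a Java sketch with hypothetical names; authorization is assumed to have already succeeded):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the FIG. 6 read path: serve from the in-memory database when the
// data structure says (K,V) is cached; otherwise fetch from the backing store,
// cache the result, and update the data structure.
class ReadPath {
    final Map<String, byte[]> inMemoryDb = new ConcurrentHashMap<>();    // 202
    final Map<String, byte[]> hbaseDb = new ConcurrentHashMap<>();       // stand-in for 203
    final Map<String, Boolean> locationIndex = new ConcurrentHashMap<>(); // the "data structure"

    Optional<byte[]> read(String key) {
        if (locationIndex.getOrDefault(key, false)) {        // step 605: is K cached?
            return Optional.ofNullable(inMemoryDb.get(key)); // steps 606-607
        }
        byte[] value = hbaseDb.get(key);                     // step 608: fetch from big data store
        if (value != null) {
            inMemoryDb.put(key, value);                      // step 609: cache for future reads
            locationIndex.put(key, true);                    // step 610: record the new location
        }
        return Optional.ofNullable(value);                   // step 611: return (K,V) to the client
    }
}
```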
[0048] FIG. 7 illustrates a computing environment implementing the method for handling big data by providing distributed storage of most important data in the Random Access Memory (RAM), whereas the actual data is persisted in a distributed, scalable big data store, according to embodiments as disclosed herein. As depicted, the computing environment 701 comprises at least one processing unit 704 that is equipped with a control unit 702 and an Arithmetic Logic Unit (ALU) 703, a memory 705, a storage unit 706, a plurality of networking devices 708 and a plurality of input/output (I/O) devices 707. The processing unit 704 is responsible for processing the instructions of the algorithm. The processing unit 704 receives commands from the control unit 702 in order to perform its processing. Further, any logical and arithmetic operations involved in the execution of the instructions are computed with the help of the ALU 703.
[0049] The overall computing environment 701 can be composed of multiple homogeneous and/or heterogeneous cores, multiple CPUs of different kinds, special media and other accelerators. The processing unit 704 is responsible for processing the instructions of the algorithm. Further, the plurality of processing units 704 may be located on a single chip or over multiple chips.
[0050] The algorithm, comprising the instructions and code required for the implementation, is stored in either the memory unit 705 or the storage 706 or both. At the time of execution, the instructions may be fetched from the corresponding memory 705 and/or storage 706 and executed by the processing unit 704.
[0051] In the case of hardware implementations, various networking devices 708 or external I/O devices 707 may be connected to the computing environment to support the implementation through the networking unit and the I/O device unit.
[0052] Embodiments disclosed herein use a distributed, scalable NoSQL database as the cache (the in-memory database 202) and do not store in-memory data in a Java heap, thereby avoiding issues related to garbage collection.
[0053] Embodiments disclosed herein enable complete in-memory processing to be performed on the data by the cache manager 201 and the in-memory database 202. This in turn increases the efficiency of the computing network and enables the clients to take real-time informed decisions based on the data.

[0054] Embodiments disclosed herein do not require additional hardware to maintain the cache (the in-memory database 202). Further, failure of the cache layer will not cause any Denial-Of-Service, as the details are persisted in the HBase database 203 and can be served from the HBase database 203 at any point of time.
[0055] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in Figs. 1 and 2 include blocks which can be at least one of a hardware device, or a combination of hardware device and software module.
[0056] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

STATEMENT OF CLAIMS
We claim:
1. A method for handling big data in a computing network, the computing network comprising of at least one server; wherein the server further comprises of a big data database, an in-memory database and a cache manager, the method comprising of
intercepting requests from a client in the computing network to at least one of the big data database and the in-memory database by the cache manager; and
performing at least one operation by the cache manager in response to the request.
2. The method, as claimed in claim 1, wherein the method further comprises of the cache manager checking if the client is authorized to perform the at least one operation, before performing the at least one operation by the cache manager.
3. The method, as claimed in claim 1, wherein the cache manager performs the at least one operation in co-operation with at least one of the big data database and the in-memory database.
4. The method, as claimed in claim 1, wherein the method further comprises of the cache manager storing information related to the at least one operation in a data structure.
5. The method, as claimed in claim 1, wherein the method further comprises of the cache manager administering the in-memory database.

6. A server present in a computing network configured for handling big data, the server further comprising of a big data database, an in-memory database and a cache manager, the cache manager further configured for
intercepting requests from a client in the computing network to at least one of the big data database and the in-memory database; and
performing at least one operation in response to the request.

7. The server, as claimed in claim 6, wherein the cache manager is further configured for checking if the client is authorized to perform the at least one operation, before performing the at least one operation by the cache manager.

8. The server, as claimed in claim 6, wherein the cache manager is further configured for performing the at least one operation in co-operation with at least one of the big data database and the in-memory database.

9. The server, as claimed in claim 6, wherein the cache manager is further configured for storing information related to the at least one operation in a data structure.

10. The server, as claimed in claim 6, wherein the cache manager is further configured for administering the in-memory database.

Documents

Application Documents

# Name Date
1 1837-CHE-2014 POWER OF ATTORNEY 07-04-2014.pdf 2014-04-07
2 1837-CHE-2014 OTHERS 07-04-2014.pdf 2014-04-07
3 1837-CHE-2014 FORM-5 07-04-2014.pdf 2014-04-07
4 1837-CHE-2014 FORM-3 07-04-2014.pdf 2014-04-07
5 1837-CHE-2014 FORM-2 07-04-2014.pdf 2014-04-07
6 1837-CHE-2014 FORM-1 07-04-2014.pdf 2014-04-07
7 1837-CHE-2014 DRAWINGS 07-04-2014.pdf 2014-04-07
8 1837-CHE-2014 DESCRIPTION (COMPLETE) 07-04-2014.pdf 2014-04-07
9 1837-CHE-2014 CORRESPONDENCE OTHERS 07-04-2014.pdf 2014-04-07
10 1837-CHE-2014 CLAIMS 07-04-2014.pdf 2014-04-07
11 1837-CHE-2014 ABSTRACT 07-04-2014.pdf 2014-04-07
12 1837-CHE-2014-Other Patent Document-090216.pdf 2016-06-29
13 1837-CHE-2014-Form 18-090216.pdf 2016-06-29
14 1837-CHE-2014-FER.pdf 2020-02-25

Search Strategy

1 searchstrategy_21-02-2020.pdf