Abstract: A system, method and apparatus for a server aligned database client through selective scheduling are disclosed. The present invention provides a mechanism to reduce the client server communication overhead by reducing the data flow between the server and client processes and by making sure that the data to be transferred is still in the local cache of the processor. The present invention provides intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the system comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the system comprises an operating system configured to provide an interface to restrict scheduling of a new process. (To be published with figure 4)
CLAIMS:
1. An apparatus for providing intercommunication of query processing between at least one database client process and at least one database server process, comprising orchestrated use of at least one shared cache and at least one thread migration process, CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the apparatus comprises an operating system configured to provide an interface to restrict scheduling of a new process.
2. The apparatus as claimed in claim 1, wherein the database client process is migrated to the same core as the database server process before a copy operation of query result data is executed, thereby enabling the data in the cache to be read and processed by the database client process.
3. The apparatus as claimed in claims 1 and 2, wherein the interface enables stopping the scheduling of at least one other process onto the same core, and the scheduling remains stopped until the client process completes reading the result data.
4. The apparatus as claimed in claim 1, wherein the shared cache comprises the data in chunk form, wherein the data to be transferred are chunked into smaller pieces so that each chunk fits into the shared cache.
5. The apparatus as claimed in any one of the above claims, wherein the operating system is configured to schedule the client process which starts the copy operation.
6. The apparatus as claimed in any one of the above claims, wherein at least one thread associated with the server process is configured to trigger the operating system with the process id to block any other process, except the server process associated with the process id, from being scheduled on the corresponding core.
7. The apparatus as claimed in any one of the above claims, wherein, when the database server and the database client are configured to read data from the shared cache, the scheduling of any other process to the same cache is prohibited, thereby avoiding shared cache invalidation.
8. The apparatus as claimed in any one of the above claims, wherein the operating system is configured to provide a mechanism to freeze the scheduling, by at least one system call like freeze_schedule(int core_id).
9. The apparatus as claimed in any one of the above claims, wherein the operating system is configured to block scheduling on the core.
10. The apparatus as claimed in any one of the above claims, wherein the at least one database client process and the at least one database server process reside in the apparatus.
11. The apparatus as claimed in any one of the above claims, wherein the at least one database client process and the at least one database server process reside in separate apparatuses.
12. A system for providing intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the system comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the system comprises an operating system configured to provide an interface to restrict scheduling of a new process.
13. The system as claimed in claim 12, wherein the database client process is migrated to the same core as the database server process before a copy operation of data is executed, thereby enabling the data in the cache to be read and processed by the database client process.
14. The system as claimed in claims 12 and 13, wherein the interface enables stopping the scheduling of at least one other process onto the same core.
15. The system as claimed in claim 12, wherein the shared cache comprises the data in chunk form, wherein the data to be transferred are chunked into smaller pieces so that each chunk fits into the shared cache.
16. The system as claimed in any one of the above claims, wherein the operating system is configured to schedule the client process which starts the copy operation.
17. The system as claimed in any one of the above claims, wherein at least one thread associated with the server process is configured to trigger the operating system with the process id.
18. The system as claimed in any one of the above claims, wherein, when the database server and the database client are configured to read data from the shared cache, the scheduling of any other process to the same cache is prohibited, thereby avoiding shared cache invalidation.
19. The system as claimed in any one of the above claims, wherein the operating system is configured to provide a mechanism to freeze the scheduling, by at least one system call like freeze_schedule(int core_id).
20. The system as claimed in any one of the above claims, wherein the operating system is configured to block scheduling on the core.
21. A method for providing intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the method comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED BY the database server and the database client reading data from the shared cache, and by an operating system providing an interface to restrict scheduling of a new process.
22. The method as claimed in claim 21, comprises migrating the database client process to the same core as the database server process before a copy operation of data is executed, thereby enabling the data in the cache to be read and processed by the database client process.
23. The method as claimed in claims 21 and 22, comprises stopping, by using the interface, the scheduling of at least one other process onto the same core.
24. The method as claimed in claim 21, wherein the shared cache comprises the data in chunk form, wherein the data to be transferred are chunked into smaller pieces so that each chunk fits into the shared cache.
25. The method as claimed in any one of the above claims, comprises scheduling, by the operating system, the client process which starts the copy operation.
26. The method as claimed in any one of the above claims, comprises triggering, by the at least one thread associated with the server process, the operating system with the process id.
27. The method as claimed in any one of the above claims, wherein, when the database server and the database client are configured to read data from the shared cache, the scheduling of any other process to the same cache is prohibited, thereby avoiding shared cache invalidation.
28. The method as claimed in any one of the above claims, comprises providing, by the operating system, a mechanism to freeze the scheduling, by at least one system call like freeze_schedule(int core_id).
29. The method as claimed in any one of the above claims, wherein the operating system is configured to block scheduling on the core.
SPECIFICATION
TECHNICAL FIELD
The present subject matter described herein, in general, relates to processing within a computing environment and database client-server communication where the client and server are located in a single machine, and more particularly, the invention relates to a system, method and apparatus to reduce the client server communication overhead.
BACKGROUND
A database system is generally used to answer queries requesting information from the stored database. A query may be defined as a logical expression over the data and the data relationships expressed in the database, and results in the identification of a subset of the database. With recent advancements, database systems enable a single query execution to be run in parallel.
The present invention focuses on the client server communication in a database system. Any traditional database management system (DBMS) has two parts: a client and a server. These parts interact through a queue. This queue could be over the network (like a socket queue) or within a single machine (over an inter process communication (IPC) channel).
The present invention relates to a database client-server communication where the client and server are located in a single machine. Considering a scenario (as shown in figure 1, a memory hierarchy in an x86 system), we observe the following technology/system changes happening:
1. As the size of the processing system increases, the time to access a memory location also increases. In a very large system with 100s of GB of physical memory distributed across many asymmetrically designed memory locations, accessing a remote memory location is many times costlier than accessing data from the local memory cache.
2. As per experimental data, the cost of a memory access from a remote location is on the order of 10-15 times that of a memory access from the L1 cache of a processor. This cost increases with the size of the system and the size of the physical memory.
3. So it is always advisable to make use of the data present in the local cache.
4. The technology has developed very fast, and very efficient algorithms and methods are being designed to improve query execution. But very little has been done to improve the efficiency of client server communication.
5. So there is a need to design a system which can make use of the co-location of the client and server processes and remove the bottleneck of client server communication overhead.
In the client server database deployment model, the client sends a request to the server over IPC, and the server processes the request and sends the result back to the client over the same channel.
In most telecom scenarios, the client and server are collocated in the same machine. The separation between the two is done to protect the server from client errors. In this case, shared memory or an operating system IPC mechanism is used instead of network communication.
In the case where the server and client are collocated in the same system, the communication data flow is the same as in a system where the client and server are on different systems.
In a collocated system also, the data from the server to the client is copied to a memory location, then moved to system space and back to user space during the copy by the client.
Further, as per the conventional approach, when the client and server are on the same machine, shared memory is usually used for communication between them. However, no advantage is taken of the fact that the client and server, being on the same machine, can make use of cache locality. So in most cases the server writes the data to RAM using shared memory and sends the address to the client, and the client reads the data from the same memory. If the client and server are attached to different processors, the data from RAM will again be loaded into a cache before being processed by the client. If the client and server are attached to cores in different NUMA nodes, there will be a remote read and the copy takes even longer.
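For context, a minimal sketch of this conventional shared-memory handoff using standard POSIX interfaces is shown below; the segment name /query_result and the helper publish_result are illustrative assumptions, while shm_open(), ftruncate(), mmap() and memcpy() are standard APIs.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stddef.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Conventional handoff: the server writes the result into a POSIX
     * shared-memory segment backed by RAM and sends only the segment
     * name/offset to the client, which maps the segment and reads it. */
    int publish_result(const char *result, size_t len)
    {
        int fd = shm_open("/query_result", O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return -1;
        if (ftruncate(fd, (off_t)len) < 0)
            return -1;
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED)
            return -1;
        memcpy(buf, result, len);  /* data lands in RAM / the server's cache */
        munmap(buf, len);
        close(fd);
        return 0;  /* the client now maps "/query_result" and reads the data */
    }

If the client then runs on a different processor or NUMA node, its read of this segment misses the server's cache, which is exactly the inefficiency addressed by the present invention.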
Hence, the major problem in the existing client server model is that, in the case of co-located client-server systems, the performance improvement opportunities are very limited. Further, the techniques available in the prior art do not take advantage of the memory hierarchy present in current processors and systems.
Thus, there is a need for an efficient mechanism by which the inefficiency mentioned above can be reduced. Further, while solving the above mentioned issues, the mechanism must also take process scheduling and related operations into consideration and handle them efficiently.
SUMMARY
This summary is provided to introduce concepts related to a system, method and apparatus for a server aligned database client through selective scheduling, which are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
One aspect of the present invention is to provide a mechanism to reduce the client server communication overhead by making use of the fact that the client and server reside on the same system.
Another aspect of the present invention is to provide a mechanism to increase the cache efficiency of the copy of data from server to client and vice versa.
Another aspect of the present invention is to provide a mechanism to pin the client (or server) thread to the same core where the data is generated, automatically, through operating system (OS) modifications.
Another aspect of the present invention is to provide a mechanism to avoid the scheduling of other processes while a data transfer is in progress, to avoid cache misses.
Another aspect of the present invention is to provide a mechanism wherein a server data output is configured to fit into the L1/L2 cache sizes of a system.
Accordingly, in one implementation, the present invention provides an apparatus for providing intercommunication of query processing between at least one database client process and at least one database server process, comprising orchestrated use of at least one shared cache and at least one thread migration process, CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the apparatus comprises an operating system configured to provide an interface to restrict scheduling of a new process.
In one implementation, the present invention provides a system for providing intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the system comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the system comprises an operating system configured to provide an interface to restrict scheduling of a new process.
In one implementation, the present invention provides a method for providing intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the method comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED BY the database server and the database client reading data from the shared cache, and by an operating system providing an interface to restrict scheduling of a new process.
The present invention provides a mechanism wherein the intercommunication of query processing between a pair of database client and database server processes is achieved by the orchestrated use of a shared cache and a thread migration process, wherein the database server and the database client can read data from the shared CPU cache (namely the L1 and L2 caches).
In one implementation, as per the present invention, the client is migrated to the same core as the server before the copy. This makes sure that the data in the cache is read and processed by the client and thus results in an efficient data copy. The data to be transferred are chunked into smaller pieces so that each chunk can fit into the system cache.
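As a non-authoritative illustration of this chunking, consider the following minimal C sketch; CHUNK_BYTES is an assumed L2-sized value and copy_in_chunks is a hypothetical helper, not an interface disclosed by the invention.

    #include <stddef.h>
    #include <string.h>

    #define CHUNK_BYTES (256 * 1024)  /* assumed L2-sized chunk; tune per CPU */

    /* Copy 'len' bytes from the server result to the client buffer in
     * cache-sized chunks, so each chunk stays resident in the shared
     * cache between the producer's write and the consumer's read. */
    static void copy_in_chunks(char *dst, const char *src, size_t len)
    {
        size_t off = 0;
        while (off < len) {
            size_t n = (len - off < CHUNK_BYTES) ? (len - off) : CHUNK_BYTES;
            memcpy(dst + off, src + off, n);
            off += n;
        }
    }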
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
Figure 1 illustrates a memory hierarchy in an x86 system of the prior art.
Figure 2 illustrates generations of client-server communication, in accordance with an embodiment of the present subject matter.
Figure 3 illustrates a data access flow comparison of the present invention with the prior art approach, in accordance with an embodiment of the present subject matter.
Figure 4 illustrates the process flow as per the present invention, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
The following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
The invention can be implemented in numerous ways, including as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
Systems, methods and apparatus to reduce the client server communication overhead by making use of the fact that the client and server reside on the same system are disclosed.
While aspects described for a server aligned database client through selective scheduling may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary systems, apparatus, and methods.
The present invention discloses a system and method to reduce the client server communication overhead by reducing the data flow between the server and client processes and by making sure that the data to be transferred is still in the local cache of the processor.
Referring now to figure 2, the invention aims to provide the next generation of shared memory communication. Various generations of client server communication are shown in figure 2.
Referring now to figure 3, which shows a data access flow comparison of the present invention with the prior art approach. As shown in figure 3, when the client and server are on the same machine, shared memory is usually used for communication between them, but no advantage is taken of the fact that, being on the same machine, they can make use of cache locality. In most cases the server writes the data to RAM using shared memory and sends the address to the client; the client reads the data from the same memory. If the client and server are attached to different processors, the data from RAM will again be loaded into a cache before being processed by the client. If the client and server are attached to cores in different NUMA nodes, there will be a remote read and the copy takes even longer. As per the present invention, the client is migrated to the same core as the server before the copy. This migration is achieved by invoking one of the standard APIs offered by the operating system; an example of such an API offered by Linux is sched_setaffinity(). This makes sure that the data in the cache is read and processed by the client and thus results in an efficient data copy. The data to be transferred are chunked into smaller pieces so that each chunk can fit into the system cache.
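As an illustration, a minimal sketch of such a migration on Linux is given below; the helper name migrate_to_core is an assumption for this example, while sched_setaffinity(), CPU_ZERO() and CPU_SET() are the standard Linux interfaces referred to above.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    /* Migrate a (client) process or thread to the core on which the
     * server produced the result, so the subsequent copy reads from a
     * warm cache rather than from RAM or a remote NUMA node. */
    static int migrate_to_core(pid_t pid, int core_id)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core_id, &set);
        /* pid == 0 means "the calling thread" */
        return sched_setaffinity(pid, sizeof(set), &set);
    }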
In order to achieve the technical solution provided in the present invention, there are two problems to be solved in this method:
1. How to ensure that other processes do not invalidate the cache before the client process can read it?
2. How to make the client process read the data, which is chunked into smaller pieces?
The above mentioned problems are addressed through changes in the scheduler of the operating system. The operating system can be changed to provide an interface which can be used to stop scheduling other processes onto the same core. The server thread can trigger the operating system with the process id. The operating system can then schedule the client process, which starts the copy, as explained in the description of figure 4 below.
Referring now to figure 4, which shows the process flow as per the present invention. As shown in figure 4, the client connects to the server. As a part of the connection process, the client submits its process id and thread id; these can be obtained trivially by the client using OS APIs like getpid() and pthread_self(). The server, after obtaining this information, records it in its memory. After this initial connection process, the client at any later point of time submits a query to the server. When the server receives this query, it verifies whether the query has come from a client collocated on the same machine; this can be determined from the connection properties of the client, namely the IP address. The server, after processing the request, places the result in its L2 cache. At this point, the server invokes an OS scheduler barrier API like stop_sched; this barrier API is a part of this invention. Once this barrier is invoked, the server sends the memory pointer of the result memory to the client, either using a message queue or through socket communication. Once the data send is done, the server invokes an active schedule switch API of the operating system. This API, also a part of this invention, tells the OS that the paired thread of the current thread should be scheduled to receive messages. The OS identifies the corresponding thread using the PID-TID combination. The affinity of this thread is switched to the current core by using standard affinity APIs like sched_setaffinity. The client thread is then scheduled to run, and it picks up the data pointed to by the memory pointer it received.
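The server-side sequence of figure 4 can be summarized in the following sketch; stop_sched() and switch_to_paired_thread() stand for the scheduler barrier and active schedule switch APIs proposed by this invention (they exist in no stock operating system), and send_result_pointer() stands in for the message-queue or socket transfer of the result pointer.

    #include <stddef.h>
    #include <sys/types.h>

    /* Proposed/assumed interfaces, declared here only for illustration. */
    extern int stop_sched(int core_id);
    extern int switch_to_paired_thread(pid_t pid, pid_t tid);
    extern int send_result_pointer(pid_t client_pid, void *result, size_t len);

    /* Server-side flow after the query result has been placed in cache. */
    void serve_result(int core_id, pid_t client_pid, pid_t client_tid,
                      void *result, size_t len)
    {
        stop_sched(core_id);              /* bar other processes from this core */
        send_result_pointer(client_pid, result, len);
                                          /* hand the result pointer to client  */
        switch_to_paired_thread(client_pid, client_tid);
                                          /* OS pins the client thread to this
                                             core and schedules it to read      */
    }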
The present invention enables the operating system to make the scheduler change. There are two steps involved in the scheduler change.
Step 1 is for the OS to offer a mechanism to freeze the scheduling. This is done by a new system call like freeze_schedule(int core_id).
Step 2 is for the OS to block scheduling on the core. Generally, the OS uses a red-black tree to schedule processes and uses all cores for scheduling. The invention simply stops the barred core from being used for scheduling.
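A sketch of the proposed scheduler change is given below; freeze_schedule() is the new system call named above, and core_allowed() illustrates the check that keeps the barred core out of the ordinary scheduling path. All names and the array-based bookkeeping are illustrative assumptions, not stock kernel code.

    #include <errno.h>

    #define MAX_CORES 256               /* illustrative limit */

    static int core_frozen[MAX_CORES];  /* per-core freeze flags */

    /* Step 1: freeze scheduling on a core via the new system call. */
    long freeze_schedule(int core_id)
    {
        if (core_id < 0 || core_id >= MAX_CORES)
            return -EINVAL;
        core_frozen[core_id] = 1;
        return 0;
    }

    /* Step 2: in the scheduler's CPU-selection path, a frozen core is
     * skipped for every task except the registered client/server pair. */
    static int core_allowed(int core_id, int is_paired_task)
    {
        return !core_frozen[core_id] || is_paired_task;
    }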
In one implementation, the present invention provides an apparatus for providing intercommunication of query processing between at least one database client process and at least one database server process, comprising orchestrated use of at least one shared cache and at least one thread migration process, CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the apparatus comprises an operating system configured to provide an interface to restrict scheduling of a new process.
In one implementation, the present invention provides a system for providing intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the system comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED IN THAT the database server and the database client are configured to read data from the shared cache, and in that the system comprises an operating system configured to provide an interface to restrict scheduling of a new process.
In one implementation, the database client process is migrated to the same core as the database server process before a copy operation of data is executed, thereby enabling the data in the cache to be read and processed by the database client process.
In one implementation, the interface enables stopping the scheduling of at least one other process onto the same core.
In one implementation, the shared cache comprises the data in chunk form, wherein the data to be transferred are chunked into smaller pieces so that each chunk fits into the shared cache.
In one implementation, the operating system is configured to schedule the client process which starts the copy operation.
In one implementation, at least one thread associated with the server process is configured to trigger the operating system with the process id.
In one implementation, when the database server and the database client are configured to read data from the shared cache, the scheduling of any other process to the same cache is prohibited, thereby avoiding shared cache invalidation.
In one implementation, the operating system is configured to provide a mechanism to freeze the scheduling, by at least one system call like freeze_schedule(int core_id).
In one implementation, the operating system is configured to block scheduling on the core.
In one implementation, the at least one database client process and the at least one database server process reside in the apparatus.
In one implementation, the at least one database client process and the at least one database server process reside in separate apparatuses.
In one implementation, the apparatus and the system are communicably coupled with the user devices / database client systems (not shown). Although the present subject matter is explained considering that the apparatus and the system are implemented as a separate computing unit, it may be understood that the apparatus and the system may also be implemented on a server, in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. It will be understood that the apparatus and the system may be accessed by multiple users through one or more user devices/client systems, collectively referred to as user systems hereinafter, or through applications residing on the user devices (not shown). Examples of the user devices may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices are communicatively coupled to the apparatus and the system through a network (not shown).
In one implementation, the network may be a wireless network, a wired network or a combination thereof. The network can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
In one implementation, the apparatus may include the processor which may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor is configured to fetch and execute computer-readable instructions stored in the memory.
The interface may be provided to the apparatus and the system may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The interface may allow the client systems/users to interact with a user directly or through the apparatus and the system. Further, the interface may enable the apparatus to communicate with other computing devices, such as web servers and external data servers (not shown). The interface can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The interface may include one or more ports for connecting a number of devices to one another or to another server.
The memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory may include at least one query compiler configured to prepare an execution plan in a tree structure, with a plurality of plan nodes, for the database query received. It shall be noted that the query compiler is a conventional compiler, and the execution plan generation is done as per the traditional/conventional approaches available in the prior art.
In one implementation, the present invention provides a method for providing intercommunication of query processing between at least one database client process and at least one database server process residing in the same or different machines, the method comprising orchestrated use of at least one shared cache and at least one thread migration process, and CHARACTERIZED BY the database server and the database client reading data from the shared cache, and by an operating system providing an interface to restrict scheduling of a new process.
The method may be performed by the apparatus in the system or by the system itself. The method may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method or alternate methods. Additionally, individual blocks may be deleted from the method without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method may be considered to be implemented in the above described apparatus and/or system.
In one implementation, the method comprises migrating the database client process to the same core as the database server process before a copy operation of data is executed, thereby enabling the data in the cache to be read and processed by the database client process.
In one implementation, the method comprises stopping, by using the interface, the scheduling of at least one other process onto the same core.
In one implementation, the shared cache comprises the data in chunk form, wherein the data to be transferred are chunked into smaller pieces so that each chunk fits into the shared cache.
In one implementation, the method comprises, scheduling, by the operating system, the client process which starts the copy operation.
In one implementation, the method comprises triggering, by the at least one thread associated with the server process, the operating system with the process id.
In one implementation, when the database server and the database client are configured to read data from the shared cache, the scheduling of any other process to the same cache is prohibited, thereby avoiding shared cache invalidation.
In one implementation, the method comprises, providing, by the operating system, a mechanism to freeze the scheduling, by at least one system call like freeze_schedule(int core_id).
In one implementation, the operating system is configured to block scheduling on the core.
Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include:
1. The mechanism disclosed in the present invention improves the performance of the client server communication by an order of magnitude.
2. The mechanism reduces the overall load on the server due to the reduced copy effort on the system.
3. The mechanism improves the usage of cores in the system.
4. The mechanism enables usage of the cache for transferring data between two processes residing in the same system by migrating the copying thread to the same core as the sender thread.
5. While a data transfer is in progress, the scheduling of other processes to the same cache line is prohibited to avoid cache invalidation.
6. In a system where there are multiple receivers and senders, this method improves the performance even more by avoiding cache corruption caused by competing threads.
7. The data to be transferred are split into smaller chunks so that each chunk can fit into the processor's cache.
8. The mechanism provides a computer-based method for interprocess communication between two processes residing in the same system by thread migration, allowing the receiver thread to read data from the local cache.
9. The mechanism prohibits the operating system from scheduling other processes to the same core, thus avoiding cache invalidation.
10. The operating system offers a method to schedule a paired process (client → server, server → client) on the same core.
11. The client process is made to be scheduled on the same core for reading the data.
12. The data to be transferred are split so as to fit into the cache, avoiding cache invalidation by the copy itself.
Although implementations for a system, method and apparatus for server aligned database client through selective scheduling have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of a system, method and apparatus for server aligned database client through selective scheduling.