Abstract: An apparatus to facilitate processing in a multi-tile device is disclosed. The apparatus comprises a plurality of processing tiles, each including a memory device and a plurality of processing resources coupled to the memory device, and a memory management unit to manage the memory device in each of the plurality of tiles to perform allocation of memory resources among the memory devices for execution by the plurality of processing resources.
Claims:1. An apparatus to facilitate processing in a multi-tile device, comprising:
a plurality of processing tiles, each including:
a memory device; and
a plurality of processing resources, coupled to the memory device; and
a memory management unit to manage the memory device in each of the plurality of tiles to perform allocation of memory resources among the memory devices for execution by the plurality of processing resources.
2. The apparatus of claim 1, wherein the memory management unit replicates a copy of a memory resource shared at each of the memory devices.
3. The apparatus of claim 2, wherein the memory management unit includes a page table associated with each memory device in the plurality of tiles.
4. The apparatus of claim 3, wherein each page table includes a table entry comprising a first virtual address associated with the shared memory resource.
5. The apparatus of claim 4, wherein each page table stores a different physical address associated with the shared memory resource.
6. The apparatus of claim 1, wherein the memory management unit distributes the memory resources among the memory devices based on memory access characteristics.
7. The apparatus of claim 1, wherein the memory management unit distributes the memory resources by assigning a contiguous virtual address range shared by the memory devices.
8. The apparatus of claim 1, wherein the plurality of processing resources are synchronized.
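The replication scheme of claims 2-5 can be pictured as a set of per-tile page tables that all hold an entry for the same virtual address of the shared resource, while each table maps that address to a tile-local physical address. The following is a minimal sketch of that idea; the class names, the allocator, and the per-tile physical base addresses are illustrative assumptions, not taken from the specification.

```python
class TilePageTable:
    """Per-tile page table mapping virtual pages to tile-local physical pages."""
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.entries = {}  # virtual address -> physical address

    def map(self, virtual_addr, physical_addr):
        self.entries[virtual_addr] = physical_addr

    def translate(self, virtual_addr):
        return self.entries[virtual_addr]


class MemoryManagementUnit:
    """Replicates a shared memory resource into every tile's local memory."""
    def __init__(self, num_tiles):
        self.page_tables = [TilePageTable(t) for t in range(num_tiles)]
        # Assumption: each tile's local memory occupies a distinct physical base.
        self.next_free = [t * 0x1000_0000 for t in range(num_tiles)]

    def replicate_shared(self, virtual_addr, size):
        """Give each tile a local copy, visible at the same virtual address."""
        for tile, table in enumerate(self.page_tables):
            table.map(virtual_addr, self.next_free[tile])
            self.next_free[tile] += size


mmu = MemoryManagementUnit(num_tiles=4)
mmu.replicate_shared(virtual_addr=0x4000, size=0x1000)
```

After `replicate_shared`, every tile translates virtual address `0x4000` successfully, but each page table stores a different physical address, mirroring claims 4 and 5.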
9. A method to facilitate processing in a multi-tile device, comprising:
receiving a workload to be processed at a graphics processing unit including a plurality of processing tiles;
generating a plurality of virtual partitions to process the workload; and
scheduling the plurality of virtual partitions for execution at a plurality of processing resources included in each of the plurality of processing tiles.
10. The method of claim 9, wherein a first virtual partition is executed at a first plurality of resources at a first processing tile and a second virtual partition is executed at a second plurality of resources at a second processing tile.
11. The method of claim 10, further comprising synchronizing the first virtual partition and the second virtual partition.
12. The method of claim 11, further comprising generating a command buffer upon receiving the workload.
13. The method of claim 12, wherein the virtual partitions comprise a plurality of dispatch parameters.
14. The method of claim 13, wherein the dispatch parameters comprise a global work size, a local work size and a work group count.
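The partitioning described in claims 9-14 can be sketched as dividing a workload's work groups across tiles, with each virtual partition carrying the dispatch parameters named in claim 14 (global work size, local work size, work group count). This is a loose illustration under assumed names and a simple even-split policy, not the specification's actual algorithm.

```python
from dataclasses import dataclass


@dataclass
class VirtualPartition:
    tile: int
    global_work_size: int   # work items covered by this partition
    local_work_size: int    # work-group size
    work_group_count: int   # groups dispatched on this tile


def generate_partitions(global_work_size, local_work_size, num_tiles):
    """Divide the workload's work groups evenly across the processing tiles."""
    total_groups = -(-global_work_size // local_work_size)  # ceiling division
    partitions = []
    for tile in range(num_tiles):
        groups = total_groups // num_tiles
        if tile < total_groups % num_tiles:
            groups += 1  # spread any remainder over the first tiles
        if groups:
            partitions.append(VirtualPartition(
                tile=tile,
                global_work_size=groups * local_work_size,
                local_work_size=local_work_size,
                work_group_count=groups,
            ))
    return partitions


parts = generate_partitions(global_work_size=1024, local_work_size=64, num_tiles=4)
```

With 1024 work items and a local work size of 64, there are 16 work groups, so each of the four tiles receives a partition of four groups, which can then be scheduled on that tile's processing resources as in claim 10.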
15. At least one computer readable medium having instructions, which when executed by one or more processors, cause the one or more processors to:
receive a workload to be processed at a graphics processing unit including a plurality of processing tiles;
generate a plurality of virtual partitions to process the workload; and
schedule the plurality of virtual partitions for execution at a plurality of processing resources included in each of the plurality of processing tiles.
16. A graphics processing unit (GPU), comprising:
a plurality of processing tiles, each including:
a memory device; and
a plurality of processing resources, coupled to the memory device;
an interface coupled between the plurality of processing tiles; and
a memory management unit to manage the memory devices in each of the plurality of tiles to perform allocation of memory resources among the memory devices for execution by the plurality of processing resources.
17. The GPU of claim 16, wherein the memory management unit replicates a copy of a memory resource shared at each of the memory devices.
18. The GPU of claim 17, wherein the memory management unit includes a page table associated with each memory device in the plurality of tiles.
19. The GPU of claim 18, wherein each page table includes a table entry comprising a first virtual address and a different physical address associated with the shared memory resource.
Description:
RELATED APPLICATION
[0001] This application claims priority to United States Patent Application No. 16/951,217, filed on November 18, 2020, entitled MULTI-TILE GRAPHICS PROCESSING UNIT, the disclosure of which is hereby incorporated by reference.
BACKGROUND
[0002] Graphics processing units (GPUs) are highly threaded machines in which hundreds of threads of a program are executed in parallel to achieve high throughput.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] So that the manner in which the above recited features of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of its scope.
[0004] Figure 1 is a block diagram of a processing system, according to an embodiment;
[0005] Figures 2A-2D illustrate computing systems and graphics processors provided by embodiments described herein;
[0006] Figures 3A-3C illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments;
[0007] Figure 4 is a block diagram of a graphics processing engine of a graphics processor in accordance with some embodiments;
[0008] Figures 5A-5B illustrate thread execution logic including an array of processing elements employed in a graphics processor core according to embodiments;
[0009] Figure 6 illustrates an additional execution unit, according to an embodiment;
[0010] Figure 7 is a block diagram illustrating graphics processor instruction formats according to some embodiments;
[0011] Figure 8 is a block diagram of a graphics processor according to another embodiment;
[0012] Figures 9A & 9B illustrate a graphics processor command format and command sequence, according to some embodiments;
[0013] Figure 10 illustrates exemplary graphics software architecture for a data processing system according to some embodiments;
[0014] Figures 11A-11D illustrate an integrated circuit package assembly, according to an embodiment;
[0015] Figure 12 is a block diagram illustrating an exemplary system on a chip integrated circuit, according to an embodiment;
[0016] Figures 13A & 13B are block diagrams illustrating an additional exemplary graphics processor;
[0017] Figure 14 illustrates a computing device according to one embodiment;
[0018] Figure 15 illustrates one embodiment of a graphics processing unit;
[0019] Figure 16 illustrates another embodiment of a graphics processing unit;
[0020] Figure 17 illustrates one embodiment of a memory view;
[0021] Figure 18 illustrates another embodiment of a memory view;
[0022] Figure 19 illustrates one embodiment of a cross engine synchronization;
[0023] Figure 20 illustrates one embodiment of a virtual partitioning;
[0024] Figure 21 is a flow diagram illustrating one embodiment of a workload distribution process; and
[0025] Figure 22 is a flow diagram illustrating one embodiment of a synchronization process.
DETAILED DESCRIPTION
In embodiments, a graphics processing unit includes a plurality of tiles, with each tile including a memory device and a plurality of processing resources. The graphics processing unit also includes a memory management unit to manage the memory devices in each of the plurality of tiles to perform allocation of memory resources among the memory devices for execution by the plurality of processing resources. In other embodiments, virtual partitions are generated to schedule execution of workloads at the plurality of tiles.
System Overview
[0026] Figure 1 is a block diagram of a processing system 100, according to an embodiment. System 100 may be used in a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 102 or processor cores 107. In one embodiment, the system 100 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices such as within Internet-of-things (IoT) devices with wired or wireless connectivity to a local or wide area network.
[0028] In one embodiment, system 100 can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. In some embodiments the system 100 is part of a mobile phone, smart phone, tablet computing device or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system 100 can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio or tactile outputs to supplement real world visual, audio or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. In some embodiments, the processing system 100 includes or is part of a television or set top box device. In one embodiment, system 100 can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane or glider (or any combination thereof). The self-driving vehicle may use system 100 to process the environment sensed around the vehicle.
[0028] In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system or user software. In some embodiments, at least one of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, instruction set 109 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). One or more processor cores 107 may process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a Digital Signal Processor (DSP).
[0029] In some embodiments, the processor 102 includes cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 107 using known cache coherency techniques. A register file 106 can be additionally included in processor 102 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.
[0030] In some embodiments, one or more processor(s) 102 are coupled with one or more interface bus(es) 110 to transmit communication signals such as address, data, or control signals between processor 102 and other components in the system 100. The interface bus 110, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI express), memory busses, or other types of interface busses. In one embodiment the processor(s) 102 include an integrated memory controller 116 and a platform controller hub 130. The memory controller 116 facilitates communication between a memory device and other components of the system 100, while the platform controller hub (PCH) 130 provides connections to I/O devices via a local I/O bus.
[0031] The memory device 120 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 120 can operate as system memory for the system 100, to store data 122 and instructions 121 for use when the one or more processors 102 executes an application or process. Memory controller 116 also couples with an optional external graphics processor 118, which may communicate with the one or more graphics processors 108 in processors 102 to perform graphics and media operations. In some embodiments, graphics, media, and or compute operations may be assisted by an accelerator 112 which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations. For example, in one embodiment the accelerator 112 is a matrix multiplication accelerator used to optimize machine learning or compute operations. In one embodiment the accelerator 112 is a ray-tracing accelerator that can be used to perform ray-tracing operations in concert with the graphics processor 108. In one embodiment, an external accelerator 119 may be used in place of or in concert with the accelerator 112.
[0032] In some embodiments a display device 111 can connect to the processor(s) 102. The display device 111 can be one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment the display device 111 can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.
[0033] In some embodiments the platform controller hub 130 enables peripherals to connect to memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a network controller 134, a firmware interface 128, a wireless transceiver 126, touch sensors 125, a data storage device 124 (e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device 124 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI express). The touch sensors 125 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 126 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long-Term Evolution (LTE) transceiver. The firmware interface 128 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller 134 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 110. The audio controller 146, in one embodiment, is a multi-channel high definition audio controller. In one embodiment the system 100 includes an optional legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 130 can also connect to one or more Universal Serial Bus (USB) controllers 142 to connect input devices, such as keyboard and mouse 143 combinations, a camera 144, or other USB input devices.
[0034] It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller 116 and platform controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 118. In one embodiment the platform controller hub 130 and/or memory controller 116 may be external to the one or more processor(s) 102. For example, the system 100 can include an external memory controller 116 and platform controller hub 130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s) 102.
[0035] For example, circuit boards (“sleds”) on which components such as CPUs, memory, and other components are placed can be designed for increased thermal performance. In some examples, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
[0036] A data center can utilize a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local.
[0037] A power supply or source can provide voltage and/or current to system 100 or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can come from a renewable energy (e.g., solar power) source. In one example, power source includes a DC power source, such as an external AC to DC converter. In one example, power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.
[0038] Figures 2A-2D illustrate computing systems and graphics processors provided by embodiments described herein. The elements of Figures 2A-2D having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
[0039] Figure 2A is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Processor 200 can include additional cores up to and including additional core 202N represented by the dashed lined boxes. Each of processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments each processor core also has access to one or more shared cached units 206. The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.
[0040] In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core 210 provides management functionality for the various processor components. In some embodiments, system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).
[0041] In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such embodiment, the system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing. System agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 202A-202N and graphics processor 208.
[0042] In some embodiments, processor 200 additionally includes graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206, and the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 211 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208.
[0043] In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.
[0044] The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and graphics processor 208 can use embedded memory modules 218 as a shared Last Level Cache.
[0045] In some embodiments, processor cores 202A-202N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. In one embodiment, processor cores 202A-202N are heterogeneous in terms of computational capability. Additionally, processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.
[0046] Figure 2B is a block diagram of hardware logic of a graphics processor core 219, according to some embodiments described herein. Elements of Figure 2B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The graphics processor core 219, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 219 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 219 can include a fixed function block 230 coupled with multiple sub-cores 221A-221F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.
[0047] In some embodiments, the fixed function block 230 includes a geometry/fixed function pipeline 231 that can be shared by all sub-cores in the graphics processor core 219, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline 231 includes a 3D fixed function pipeline (e.g., 3D pipeline 312 as in Figure 3 and Figure 4, described below), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., unified return buffer 418 in Figure 4, as described below).
[0048] In one embodiment the fixed function block 230 also includes a graphics SoC interface 232, a graphics microcontroller 233, and a media pipeline 234. The graphics SoC interface 232 provides an interface between the graphics processor core 219 and other processor cores within a system on a chip integrated circuit. The graphics microcontroller 233 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core 219, including thread dispatch, scheduling, and pre-emption. The media pipeline 234 (e.g., media pipeline 316 of Figure 3 and Figure 4) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline 234 implements media operations via requests to compute or sampling logic within the sub-cores 221A-221F.
[0049] In one embodiment the SoC interface 232 enables the graphics processor core 219 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 232 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core 219 and CPUs within the SoC. The SoC interface 232 can also implement power management controls for the graphics processor core 219 and enable an interface between a clock domain of the graphics core 219 and other clock domains within the SoC. In one embodiment the SoC interface 232 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 234, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 231, geometry and fixed function pipeline 237) when graphics processing operations are to be performed.
[0050] The graphics microcontroller 233 can be configured to perform various scheduling and management tasks for the graphics processor core 219. In one embodiment the graphics microcontroller 233 can perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays 222A-222F, 224A-224F within the sub-cores 221A-221F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core 219 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment the graphics microcontroller 233 can also facilitate low-power or idle states for the graphics processor core 219, providing the graphics processor core 219 with the ability to save and restore registers within the graphics processor core 219 across low-power state transitions independently from the operating system and/or graphics driver software on the system.
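The doorbell-driven scheduling model above can be sketched as a simple loop: the host rings a doorbell to submit a workload, the microcontroller picks the next workload to run, hands it to a command streamer, and records completion for host notification. The class and method names below, and the FIFO scheduling policy, are assumptions for illustration only; the real microcontroller firmware is not disclosed here.

```python
from collections import deque


class GraphicsMicrocontroller:
    """Toy model of doorbell submission, scheduling, and completion notice."""

    def __init__(self):
        self.doorbell_queue = deque()  # workloads rung in by host software
        self.command_streamer = []     # stand-in for the engine's front end
        self.completed = []            # completions reported back to the host

    def ring_doorbell(self, workload):
        # Host-side submission: enqueue the workload and invoke scheduling.
        self.doorbell_queue.append(workload)
        self.schedule()

    def schedule(self):
        # Determine which workload runs next (FIFO assumed) and submit it
        # to the command streamer; record completion for host notification.
        while self.doorbell_queue:
            workload = self.doorbell_queue.popleft()
            self.command_streamer.append(workload)
            self.completed.append(workload)


gmc = GraphicsMicrocontroller()
for name in ["draw", "compute", "blit"]:
    gmc.ring_doorbell(name)
```

In this sketch the three workloads flow through the doorbell queue to the command streamer in submission order; a real scheduler would additionally handle pre-emption and progress monitoring as the paragraph describes.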
[0051] The graphics processor core 219 may have greater than or fewer than the illustrated sub-cores 221A-221F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core 219 can also include shared function logic 235, shared and/or cache memory 236, a geometry/fixed function pipeline 237, as well as additional fixed function logic 238 to accelerate various graphics and compute processing operations. The shared function logic 235 can include logic units associated with the shared function logic 420 of Figure 4 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within the graphics processor core 219. The shared and/or cache memory 236 can be a last-level cache for the set of N sub-cores 221A-221F within the graphics processor core 219, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 237 can be included instead of the geometry/fixed function pipeline 231 within the fixed function block 230 and can include the same or similar logic units.