
Disaggregation of SoC Architecture

Abstract: Embodiments described herein provide techniques to disaggregate an architecture of a system on a chip integrated circuit into multiple distinct chiplets that can be packaged onto a common chassis. In one embodiment, a graphics processing unit or parallel processor is composed from diverse silicon chiplets that are separately manufactured. A chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device.


Patent Information

Application #
Filing Date
03 December 2021
Publication Number
50/2021
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

INTEL CORPORATION
2200 Mission College Boulevard, Santa Clara, California 95054, USA

Inventors

1. MATAM, Naveen
4086 Kalamata Way Rancho Cordova, California 95742 (US)
2. CHENEY, Lance
7130 Agora Way El Dorado Hills, California 95762 (US)
3. FINLEY, Eric
12050 Paine Rd Ione, California 95640 (US)
4. GEORGE, Varghese
460 Tobrurry Way Folsom, California 95630 (US)
5. JAHAGIRDAR, Sanjeev
1675 Stronsay Court Folsom, California 95630 (US)
6. KOKER, Altug
8241 Trevi Way El Dorado Hills, California 95762 (US)
7. MASTRONARDE, Josh
1212 Kondos Ave Sacramento, California 95814 (US)
8. RAJWANI, Iqbal
9579 Highland Park Dr. Roseville, California 95678 (US)
9. STRIRAMASSARMA, Lakshminarayanan
2073 Tarbolton Circle Folsom, California 95630 (US)
10. TESHOME, Melaku
4003 Hawick Way El Dorado Hills, California 95762 (US)
11. VEMULAPALLI, Vikranth
677 Westchester Drive Folsom, California 95630 (US)
12. XAVIER, Binoj
1645 Bowen Drive Folsom, California 95630 (US)

Specification

Claims:

1. An apparatus comprising:
a package assembly comprising a plurality of chiplets (3602) and a plurality of interconnect structures, the plurality of chiplets (3602) including:
a first chiplet comprising a first base chiplet (3604; 2910) coupled to a bridge interconnect (2917; 3606) and an interconnect structure (2903), the first base chiplet (3604; 2910) including:
an interconnect fabric (2915), and
a first plurality of level 3 cache banks (2809A-2809N) to cache data read from and transmitted to a memory;
a second chiplet comprising a second base chiplet (3608), the second chiplet coupled to the first chiplet over the bridge interconnect; and
a third chiplet including a second plurality of level 3 cache banks, the third chiplet stacked on the first base chiplet (3604; 2910) in a 3D arrangement and coupled to the first base chiplet over the interconnect structure.

Description:

FIELD
[0003] Embodiments relate generally to the design and manufacturing of general-purpose graphics and parallel processing units.

BACKGROUND OF THE DESCRIPTION
[0004] Current parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data; however, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.
[0005] To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible throughout the different parts of the graphics pipeline. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for SIMT architectures can be found in Shane Cook, CUDA Programming Chapter 3, pages 37-51 (2013).
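The SIMT execution model described above can be illustrated with a small simulation. The sketch below is purely illustrative (it is not from the patent): a group of "lanes" executes one shared instruction stream in lockstep, and a predicate mask deactivates divergent lanes instead of branching, which is the core idea behind SIMT efficiency. All names and the toy instruction format are assumptions made for this example.

```python
# A minimal sketch of SIMT-style lockstep execution with an active mask.
# All names and the toy instruction format are illustrative only.

def simt_execute(program, lane_inputs):
    """Run each instruction for all lanes together, masking divergent lanes."""
    lanes = list(lane_inputs)
    n = len(lanes)
    active = [True] * n  # active mask: which lanes execute the current op
    for op in program:
        if op[0] == "where":        # begin predicated region: lanes failing
            pred = op[1]            # the predicate are masked, not branched
            active = [a and pred(v) for a, v in zip(active, lanes)]
        elif op[0] == "end_where":  # re-converge: all lanes active again
            active = [True] * n
        else:                       # arithmetic op applied in lockstep
            fn = op[1]
            lanes = [fn(v) if a else v for a, v in zip(active, lanes)]
    return lanes

# Four "threads" run the same program; only lanes whose value passes the
# predicate take the guarded add, mirroring branch divergence in a warp.
program = [
    ("mul", lambda v: v * 2),
    ("where", lambda v: v % 4 != 0),
    ("add", lambda v: v + 1),
    ("end_where",),
    ("sub", lambda v: v - 1),
]
print(simt_execute(program, [0, 1, 2, 3]))  # → [-1, 2, 3, 6]
```

Lanes that fail the predicate simply hold their values during the guarded region, so all lanes still advance through the instruction stream together.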

BRIEF DESCRIPTION OF THE DRAWINGS
[0006] So that the manner in which the above recited features of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings, and in which:
[0007] FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;
[0008] FIG. 2A-2D illustrate parallel processor components, according to an embodiment;
[0009] FIG. 3A-3C are block diagrams of graphics multiprocessors and multiprocessor-based GPUs, according to embodiments;
[0010] FIG. 4A-4F illustrate an exemplary architecture in which a plurality of GPUs is communicatively coupled to a plurality of multi-core processors;
[0011] FIG. 5 illustrates a graphics processing pipeline, according to an embodiment;
[0012] FIG. 6 illustrates a machine learning software stack, according to an embodiment;
[0013] FIG. 7 illustrates a general-purpose graphics processing unit, according to an embodiment;
[0014] FIG. 8 illustrates a multi-GPU computing system, according to an embodiment;
[0015] FIG. 9A-9B illustrate layers of exemplary deep neural networks;
[0016] FIG. 10 illustrates an exemplary recurrent neural network;
[0017] FIG. 11 illustrates training and deployment of a deep neural network;
[0018] FIG. 12 is a block diagram illustrating distributed learning;
[0019] FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) suitable for performing inferencing using a trained model;
[0020] FIG. 14 is a block diagram of a processing system, according to an embodiment;
[0021] FIG. 15 is a block diagram of a processor according to an embodiment;
[0022] FIG. 16 is a block diagram of a graphics processor, according to an embodiment;
[0023] FIG. 17 is a block diagram of a graphics processing engine of a graphics processor in accordance with some embodiments;
[0024] FIG. 18 is a block diagram of hardware logic of a graphics processor core, according to some embodiments described herein;
[0025] FIG. 19A-19B illustrate thread execution logic including an array of processing elements employed in a graphics processor core according to embodiments described herein;
[0026] FIG. 20 is a block diagram illustrating graphics processor instruction formats according to some embodiments;
[0027] FIG. 21 is a block diagram of a graphics processor according to another embodiment;
[0028] FIG. 22A-22B illustrate a graphics processor command format and command sequence, according to some embodiments;
[0029] FIG. 23 illustrates exemplary graphics software architecture for a data processing system according to some embodiments;
[0030] FIG. 24A is a block diagram illustrating an IP core development system, according to an embodiment;
[0031] FIG. 24B illustrates a cross-section side view of an integrated circuit package assembly, according to some embodiments described herein;
[0032] FIG. 25 is a block diagram illustrating an exemplary system on a chip integrated circuit, according to an embodiment;
[0033] FIG. 26A-26B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein;
[0034] FIG. 27 shows a parallel compute system, according to an embodiment;
[0035] FIG. 28A-28B illustrate a hybrid logical/physical view of a disaggregated parallel processor, according to embodiments described herein;
[0036] FIG. 30 illustrates a message transportation system for an interconnect fabric, according to an embodiment;
[0037] FIG. 31 illustrates transmission of messages or signals between functional units across multiple physical links of the interconnect fabric;
[0038] FIG. 32 illustrates transmission of messages or signals for multiple functional units across a single physical link of the interconnect fabric;
[0039] FIG. 33 illustrates a method of configuring a fabric connection for a functional unit within a disaggregated parallel processor;
[0040] FIG. 34 illustrates a method of relaying messages and/or signals across an interconnect fabric within a disaggregated parallel processor;
[0041] FIG. 35 illustrates a method of power gating chiplets on a per-workload basis;
[0042] FIG. 36 illustrates a parallel processor assembly including interchangeable chiplets;
[0043] FIG. 37 illustrates an interchangeable chiplet system, according to an embodiment;
[0044] FIG. 38 is an illustration of multiple traffic classes carried over virtual channels, according to an embodiment;
[0045] FIG. 39 illustrates a method of agnostic data transmission between slots for interchangeable chiplets, according to an embodiment;
[0046] FIG. 40 illustrates a modular architecture for interchangeable chiplets, according to an embodiment;
[0047] FIG. 41 illustrates the use of a standardized chassis interface for use in enabling chiplet testing, validation and integration;
[0048] FIG. 42 illustrates the use of individually binned chiplets to create a variety of product tiers; and
[0049] FIG. 43 illustrates a method of enabling different product tiers based on chiplet configuration.

DETAILED DESCRIPTION
[0050] In some embodiments, a graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or another interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.
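The host-to-GPU submission flow in paragraph [0050] — cores packing work into descriptors that the GPU's dedicated logic then consumes — can be sketched as a simple producer/consumer queue. This is a hypothetical model, not the patent's or any real driver's API; the descriptor fields and class names are assumptions chosen for illustration.

```python
# Hypothetical host-to-GPU submission flow: the host packs a work descriptor
# (kernel id, grid size, argument address) into a command queue; the device
# side drains the queue and dispatches each launch. Names are illustrative.

from collections import deque
from dataclasses import dataclass

@dataclass
class WorkDescriptor:
    kernel_id: int   # which device kernel to launch
    grid_size: int   # number of thread groups to dispatch
    args_addr: int   # device address of the packed kernel arguments

class CommandQueue:
    def __init__(self):
        self._ring = deque()

    def submit(self, desc: WorkDescriptor):
        # Host side: enqueue a descriptor describing one unit of work.
        self._ring.append(desc)

    def drain(self):
        # Device side: consume descriptors in order and "dispatch" them.
        launched = []
        while self._ring:
            d = self._ring.popleft()
            launched.append((d.kernel_id, d.grid_size))
        return launched

q = CommandQueue()
q.submit(WorkDescriptor(kernel_id=7, grid_size=64, args_addr=0x1000))
q.submit(WorkDescriptor(kernel_id=9, grid_size=128, args_addr=0x2000))
print(q.drain())  # → [(7, 64), (9, 128)]
```

The key property mirrored here is that the host only describes the work; the device-side consumer decides when and how to execute it, regardless of whether the GPU sits on PCIe, NVLink, or an internal package interconnect.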

Documents

Application Documents

# Name Date
1 202148056125-Annexure [28-03-2025(online)].pdf 2025-03-28
2 202148056125-Correspondence to notify the Controller [11-02-2025(online)].pdf 2025-02-11
3 202148056125-FORM 1 [03-12-2021(online)].pdf 2021-12-03
4 202148056125-DRAWINGS [03-12-2021(online)].pdf 2021-12-03
5 202148056125-US(14)-HearingNotice-(HearingDate-13-03-2025).pdf 2025-02-11
6 202148056125-Written submissions and relevant documents [28-03-2025(online)].pdf 2025-03-28
7 202148056125-CLAIMS [10-03-2023(online)].pdf 2023-03-10
8 202148056125-DECLARATION OF INVENTORSHIP (FORM 5) [03-12-2021(online)].pdf 2021-12-03
9 202148056125-FER_SER_REPLY [10-03-2023(online)].pdf 2023-03-10
10 202148056125-COMPLETE SPECIFICATION [03-12-2021(online)].pdf 2021-12-03
11 202148056125-OTHERS [10-03-2023(online)].pdf 2023-03-10
12 202148056125-FORM 18 [13-12-2021(online)].pdf 2021-12-13
13 202148056125-FORM-26 [04-02-2022(online)].pdf 2022-02-04
14 202148056125-FORM 4(ii) [24-12-2022(online)].pdf 2022-12-24
15 202148056125-Information under section 8(2) [14-12-2022(online)].pdf 2022-12-14
16 202148056125-FORM 3 [01-06-2022(online)].pdf 2022-06-01
17 202148056125-FER.pdf 2022-06-24
18 202148056125-FORM 3 [13-12-2022(online)].pdf 2022-12-13
19 202148056125-Proof of Right [13-12-2022(online)].pdf 2022-12-13

Search Strategy

1 SEARCHSTRATEGY-E_17-06-2022.pdf