
A Bayesian Compute Unit With Reconfigurable Sampler And Methods And Apparatus To Operate The Same

Abstract: Methods, apparatus, systems, and articles of manufacture providing a Bayesian compute unit with reconfigurable sampler and methods and apparatus to operate the same are disclosed. An example apparatus includes a processor element to generate (a) a first element by applying a mean value to an activation and (b) a second element by applying a variance value to a square of the activation, the mean value and the variance value corresponding to a single probability distribution; a programmable sampling unit to: generate a pseudo random number; and generate an output based on the pseudo random number, the first element, and the second element, wherein the output corresponds to the single probability distribution; and output memory to store the output.


Patent Information

Application #:
Filing Date: 16 March 2022
Publication Number: 49/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

INTEL CORPORATION
2200 Mission College Boulevard, Santa Clara, California 95054, USA

Inventors

1. SRIVATSA RS
1091 NE Orenco Station PKWY APT. E316 Hillsboro, OR 97124 USA
2. INDRANIL CHAKRABORTY
1984 W El Camino Real Apt 425 Mountain View, CA 94040 USA
3. RANGANATH KRISHNAN
2111 NE 25th Ave Mail stop: JF2-65 Hillsboro, OR 97124 USA
4. UDAY A KORAT
802 Canyon Terrace Ln Folsom, CA 95630 USA
5. MULUKEN HAILESELLASIE
1776 Almaden Rd #209 San Jose, CA 95125 USA
6. JAINAVEEN SUNDARAM PRIYA
7620 SE Monarch Lane Hillsboro, OR 97123 USA
7. DEEPAK DASALUKUNTE
16090 NW Ashfield Dr Beaverton, OR 97006 USA
8. DILEEP KURIAN
X-102, Tower-1 Adarsh Palm Retreat, Bellandur Bangalore, KA 560093 India
9. TANAY KARNIK
14585 NW Oak Shadow Ct Portland, OR 97229 USA

Specification

Claims:
1. A node of a neural network to apply a plurality of weights in an artificial intelligence-based model, the node comprising:
a processor element to generate (a) a first element by applying a mean value to an activation and (b) a second element by applying a variance value to a square of the activation, the mean value and the variance value corresponding to a single probability distribution;
a programmable sampling unit to:
generate a pseudo random number; and
generate an output based on the pseudo random number, the first element, and the second element, wherein the output corresponds to the single probability distribution; and
output memory to store the output.
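As a hypothetical illustration (not part of the specification), the computation recited in claim 1 can be sketched in Python assuming a Gaussian distribution and the common reparameterization pattern, where the output is drawn from a distribution with mean equal to the mean value applied to the activation and variance equal to the variance value applied to the square of the activation; the function name and the use of `random.gauss` are assumptions for illustration only:

```python
import math
import random

def bayesian_node_output(activation, mean, variance, rng=random):
    # First element: the mean value applied to the activation.
    first = mean * activation
    # Second element: the variance value applied to the square of the activation.
    second = variance * activation ** 2
    # Pseudo random number drawn from a standard normal distribution.
    epsilon = rng.gauss(0.0, 1.0)
    # Output corresponding to the single probability distribution
    # defined by the first element (mean) and second element (variance).
    return first + math.sqrt(second) * epsilon
```

With a variance of zero the sketch degenerates to a fixed-weight node, returning exactly the mean value applied to the activation.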
Description:
RELATED APPLICATIONS
[0001] The present application claims priority to India Provisional Patent Application No. 202141024844 filed June 4, 2021 and titled “RECONFIGURABLE HIGH-PERFORMANCE ARCHITECTURE AND EFFICIENT ALGORITHMS TO LEVERAGE INPUT RE-USE IN BAYESIAN DNN WORKLOADS” the entire disclosure of which is hereby incorporated by reference.
[0002] The present application claims priority to U.S. Non-Provisional Patent Application No. 17/483,382 filed September 23, 2021 and titled “BAYESIAN COMPUTE UNIT WITH RECONFIGURABLE SAMPLER AND METHODS AND APPARATUS TO OPERATE THE SAME” the entire disclosure of which is hereby incorporated by reference.

FIELD OF THE DISCLOSURE
[0003] This disclosure relates generally to machine learning, and, more particularly, to a Bayesian compute unit with reconfigurable sampler and methods and apparatus to operate the same.

BACKGROUND
[0004] In recent years, artificial intelligence (e.g., machine learning, deep learning, etc.) has increased in popularity. Artificial intelligence may be implemented using neural networks. Neural networks are computing systems inspired by the neural networks of human brains. A neural network can receive an input and generate an output. The neural network includes a plurality of neurons corresponding to weights that can be trained (e.g., can learn, be weighted, etc.) based on feedback so that the output corresponds to a desired result. Once the weights are trained, the neural network can make decisions to generate an output based on any input. Neural networks are used in the emerging fields of artificial intelligence and/or machine learning. A Bayesian neural network is a particular type of neural network that includes neurons that generate a variable weight, as opposed to a fixed weight. The variable weight falls within a probability distribution defined by a mean value and a variance determined during training of the Bayesian neural network.
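As a hypothetical illustration (not part of the disclosure), a Bayesian neuron's variable weight can be drawn from its learned distribution as follows; `weight_mean` and `weight_variance` are assumed example values, not trained parameters from the specification:

```python
import random

# Hypothetical trained parameters for one weight of a Bayesian neuron.
weight_mean = 0.8
weight_variance = 0.04  # i.e., a standard deviation of 0.2

def sample_weight(mean, variance, rng=random):
    # Each forward pass draws a fresh weight from the probability
    # distribution defined by the trained mean and variance.
    return rng.gauss(mean, variance ** 0.5)
```

Averaged over many draws, the sampled weights concentrate around the trained mean, while individual draws vary according to the trained variance.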

BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 is a schematic illustration of an example Bayesian neural network.
[0006] FIGS. 2A and 2B are block diagrams of example implementations of a layer of the Bayesian neural network of FIG. 1.
[0007] FIG. 3 illustrates an example of the data processing flow of example processing elements of the layers of FIGS. 2A and/or 2B.
[0008] FIG. 4A is a block diagram of an example implementation of the programmable sampling unit of the layer of FIGS. 2A and/or 2B.
[0009] FIG. 4B is a circuit diagram of an example implementation of the programmable sampling unit of the layer of FIGS. 2A and/or 2B.
[0010] FIGS. 5-6 illustrate a flowchart representative of example machine readable instructions which may be executed to implement the example Bayesian compute node and/or the programmable sampling unit of FIGS. 2A, 2B, 3, and/or 4.
[0011] FIG. 7 is a block diagram of an example processing platform structured to execute the instructions of FIGS. 5-6 to implement the example Bayesian compute node and/or programmable sampling unit of FIGS. 2A, 2B, 3, and/or 4.
[0012] FIG. 8 is a block diagram of an example implementation of the processor circuitry of FIG. 7.
[0013] FIG. 9 is a block diagram of another example implementation of the processor circuitry of FIG. 7.
[0014] FIG. 10 is a block diagram of an example software distribution platform to distribute software (e.g., software corresponding to the example computer readable instructions of FIGS. 5-6) to client devices such as consumers (e.g., for license, sale and/or use), retailers (e.g., for sale, re-sale, license, and/or sub-license), and/or original equipment manufacturers (OEMs) (e.g., for inclusion in products to be distributed to, for example, retailers and/or to direct buy customers).
[0015] The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other. Although the figures show layers and regions with clean lines and boundaries, some or all of these lines and/or boundaries may be idealized. In reality, the boundaries and/or lines may be unobservable, blended, and/or irregular.
[0016] Descriptors "first," "second," "third," etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.

DETAILED DESCRIPTION
[0017] Machine learning models, such as neural networks, are used to perform a task (e.g., classify data). Machine learning can include a training stage to train the model using ground truth data (e.g., data correctly labelled with a particular classification). Training a traditional neural network adjusts the weights of the neurons of the neural network. Once trained, the neural network applies the trained weights to input data to process the input data and perform a function (e.g., classify data).
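The fixed-weight behavior of a trained traditional neuron described above can be sketched as follows; `dense_forward` is a hypothetical helper, not part of the disclosed apparatus:

```python
def dense_forward(inputs, weights, bias):
    # A trained traditional neuron applies fixed weights to its inputs
    # and adds a fixed bias; the same inputs always yield the same output.
    return sum(x * w for x, w in zip(inputs, weights)) + bias
```

In contrast to the Bayesian case, these weights never vary between forward passes, which is the distinction the disclosure's variable-weight neurons address.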

Documents

Application Documents

# Name Date
1 202244014143-FORM 1 [16-03-2022(online)].pdf 2022-03-16
2 202244014143-DRAWINGS [16-03-2022(online)].pdf 2022-03-16
3 202244014143-DECLARATION OF INVENTORSHIP (FORM 5) [16-03-2022(online)].pdf 2022-03-16
4 202244014143-COMPLETE SPECIFICATION [16-03-2022(online)].pdf 2022-03-16
5 202244014143-FORM-26 [11-07-2022(online)].pdf 2022-07-11
6 202244014143-FORM 3 [14-03-2023(online)].pdf 2023-03-14
7 202244014143-FORM 3 [14-09-2023(online)].pdf 2023-09-14
8 202244014143-FORM 3 [14-03-2024(online)].pdf 2024-03-14
9 202244014143-FORM 18 [27-05-2025(online)].pdf 2025-05-27