Abstract: Data stream (45) having a representation of a neural network (10) encoded thereinto, the data stream (45) comprising a serialization parameter (102) indicating a coding order (104) at which neural network parameters (32), which define neuron interconnections (22, 24) of the neural network (10), are encoded into the data stream (45). To Be Published with Figure 4
Description: AS ATTACHED

Claims: I/We Claim:
1. Data stream (45) having a representation of a neural network (10) encoded thereinto, wherein the data stream (45) is structured into individually accessible portions (200), each portion representing a corresponding neural network portion of the neural network, wherein the data stream (45) comprises for each of one or more predetermined individually accessible portions (200) an identification parameter (310) for identifying the respective predetermined individually accessible portion.
2. Apparatus for encoding a representation of a neural network (10) into a data stream (45), so that the data stream (45) is structured into individually accessible portions (200), each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to provide the data stream (45) with, for each of one or more predetermined individually accessible portions, an identification parameter (310) for identifying the respective predetermined individually accessible portion.
3. Apparatus for decoding a representation of a neural network (10) from a data stream (45), wherein the data stream (45) is structured into individually accessible portions (200), each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to decode from the data stream (45), for each of one or more predetermined individually accessible portions, an identification parameter (310) for identifying the respective predetermined individually accessible portion.
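The structuring of the data stream into individually accessible portions (200) can be illustrated with a minimal Python sketch. The length-prefixed container layout and the function names below are hypothetical illustrations, not part of the claimed encoding; the point is only that a decoder can seek to any one portion without decoding the others.

```python
import struct

def pack_portions(portions: list[bytes]) -> bytes:
    # Minimal sketch of a data stream structured into individually
    # accessible portions: each portion is length-prefixed so a decoder
    # can locate any portion without decoding the preceding payloads.
    out = b""
    for p in portions:
        out += struct.pack("<I", len(p)) + p
    return out

def unpack_portion(stream: bytes, index: int) -> bytes:
    # Individual access: skip over length prefixes to the wanted portion.
    off = 0
    for _ in range(index):
        (n,) = struct.unpack_from("<I", stream, off)
        off += 4 + n
    (n,) = struct.unpack_from("<I", stream, off)
    return stream[off + 4 : off + 4 + n]

# Each payload stands in for one encoded neural network portion.
stream = pack_portions([b"layer0", b"layer1", b"layer2"])
assert unpack_portion(stream, 1) == b"layer1"
```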
4. Apparatus of claim 3, wherein the identification parameter (310) is related to the respective predetermined individually accessible portion via a hash function or error detection code or error correction code.
5. Apparatus of claim 3 or claim 4, wherein the apparatus is configured to decode, from the data stream (45), a higher-level identification parameter (310) for identifying a collection of more than one predetermined individually accessible portion.
6. Apparatus of claim 5, wherein the higher-level identification parameter (310) is related to the identification parameters (310) of the more than one predetermined individually accessible portion via a hash function or error detection code or error correction code.
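Claims 4 to 6 relate the identification parameters to portions via a hash function, error detection code, or error correction code. A minimal sketch under one of those options, a hash function (SHA-256 is an illustrative choice, not mandated by the claims), with the higher-level parameter computed over the per-portion parameters as in claim 6:

```python
import hashlib

def portion_id(payload: bytes) -> bytes:
    # Identification parameter (310) for one individually accessible
    # portion, here realized as a hash of the portion's encoded payload.
    return hashlib.sha256(payload).digest()

def collection_id(portion_ids: list[bytes]) -> bytes:
    # Higher-level identification parameter: related to the identification
    # parameters of more than one portion via a hash function (claim 6).
    h = hashlib.sha256()
    for pid in portion_ids:
        h.update(pid)
    return h.digest()

# A decoder can check integrity of a received portion by recomputing
# the hash and comparing it with the parameter carried in the stream.
portions = [b"layer-0-weights", b"layer-1-weights"]
ids = [portion_id(p) for p in portions]
top = collection_id(ids)
assert portion_id(portions[0]) == ids[0]
```

With this layering, a corrupted portion changes its own identification parameter and, through it, the higher-level parameter of every collection containing it.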
7. Apparatus of any of previous claims 3 to 6, wherein the apparatus is configured to decode, from the data stream (45), the individually accessible portions (200) using context-adaptive arithmetic decoding and using context initialization at a start of each individually accessible portion.
8. Apparatus of any of previous claims 3 to 7, wherein the neural network portions comprise one or more sub-portions of a neural network layer (210, 30) of the neural network and/or one or more neural network layers of the neural network.
9. Apparatus of any of previous claims 3 to 8, wherein the apparatus is configured to decode a representation of a neural network (10) from a data stream (45), into which same is encoded in a layered manner so that different versions (330) of the neural network are encoded into the data stream (45), and so that the data stream (45) is structured into one or more individually accessible portions (200), each portion relating to a corresponding version of the neural network, wherein the apparatus is configured to decode a first version (330₂) of the neural network from a first portion
by using delta-decoding relative to a second version (330₁) of the neural network encoded into a second portion, and/or
by decoding from the data stream (45) one or more compensating neural network portions (332), each of which is to be, for performing an inference based on the first version (330₂) of the neural network,
executed in addition to an execution of a corresponding neural network portion (334) of the second version (330₁) of the neural network encoded into the second portion, and
wherein outputs of the respective compensating neural network portion (332) and corresponding neural network portion (334) are to be summed up.
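The compensating-portion alternative of claim 9 can be sketched as follows. The toy one-dimensional linear portions and their coefficients are hypothetical stand-ins for the decoded neural network portions; the sketch shows only the claimed behavior, namely executing the compensating portion (332) alongside the corresponding portion (334) of the second version and summing the outputs.

```python
def base_portion(x: float) -> float:
    # Corresponding neural network portion (334) of the second version,
    # here a toy linear map y = 2.0*x + 1.0.
    return 2.0 * x + 1.0

def compensating_portion(x: float) -> float:
    # Compensating portion (332) decoded for the first version,
    # here a small toy correction y = 0.5*x - 0.25.
    return 0.5 * x - 0.25

def infer_first_version(x: float) -> float:
    # Inference based on the first version: execute the compensating
    # portion in addition to the corresponding base portion and sum
    # the two outputs, as recited in claim 9.
    return base_portion(x) + compensating_portion(x)

# The sum behaves like a single updated portion y = 2.5*x + 0.75, so the
# first version is obtained without retransmitting the base parameters.
assert infer_first_version(2.0) == 5.75
```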
10. Apparatus of any of previous claims 3 to 9, wherein the apparatus is configured to decode a representation of a neural network (10) from a data stream (45), wherein the data stream (45) is structured into individually accessible portions (200), each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to decode from the data stream (45), for each of one or more predetermined individually accessible portions, supplemental data (350) for supplementing the representation of the neural network.
11. Apparatus of claim 10, wherein the data stream (45) indicates the supplemental data (350) as being dispensable for inference based on the neural network.
12. Apparatus of claim 10 or claim 11, wherein the apparatus is configured to decode the supplemental data (350) for supplementing the representation of the neural network for the one or more predetermined individually accessible portions (200) from further individually accessible portions, wherein the data stream (45) comprises, for each of the one or more predetermined individually accessible portions, a corresponding further predetermined individually accessible portion relating to the neural network portion to which the respective predetermined individually accessible portion corresponds.
13. Apparatus of any of previous claims 10 to 12, wherein the supplemental data (350) relates to
relevance scores of neural network parameters (32), and/or
perturbation robustness of neural network parameters (32).
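Claims 10 to 13 attach supplemental data (350), such as relevance scores, to portions, with claim 11 allowing the stream to mark it as dispensable for inference. A minimal sketch, in which the field names and flag values are hypothetical illustrations rather than claimed syntax:

```python
from dataclasses import dataclass, field

@dataclass
class Portion:
    # Encoded neural network portion (e.g. the weights of one layer).
    payload: bytes
    # Supplemental data (350); marked dispensable for inference (claim 11),
    # so a decoder interested only in inference may skip decoding it.
    supplemental: dict = field(default_factory=dict)

p = Portion(
    payload=b"layer0-weights",
    supplemental={
        "relevance_scores": [0.9, 0.1],   # claim 13: relevance of parameters
        "dispensable": True,              # claim 11: not needed for inference
    },
)

# An inference-only decoder keeps the payload and ignores the rest.
if p.supplemental.get("dispensable"):
    weights = p.payload
```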
14. Apparatus of any of previous claims 3 to 13, for decoding a representation of a neural network (10) from a data stream (45), wherein the apparatus is configured to decode from the data stream (45) hierarchical control data (400) structured into a sequence (410) of control data portions (420), wherein the control data portions provide information on the neural network at increasing detail along the sequence of control data portions.
15. Apparatus of claim 14, wherein at least some of the control data portions (420) provide information on the neural network which is partially redundant.
16. Apparatus of claim 14 or claim 15, wherein a first control data portion provides the information on the neural network by way of indicating a default neural network type implying default settings and a second control data portion comprises a parameter to indicate each of the default settings.
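The default-settings mechanism of claims 14 to 16 can be sketched as follows. The network type name, the default values, and the dictionary-based control data portions are hypothetical illustrations; the sketch only shows a first, coarse portion implying defaults and later, more detailed portions refining them.

```python
# Hypothetical defaults implied by a default neural network type (claim 16).
DEFAULTS = {
    "resnet50": {"layers": 50, "activation": "relu", "input_size": 224},
}

def parse_control_data(control_portions: list[dict]) -> dict:
    # First control data portion names a default type, which implies
    # default settings; subsequent portions provide information at
    # increasing detail and may restate or override each setting,
    # making the information partially redundant (claim 15).
    settings = dict(DEFAULTS[control_portions[0]["type"]])
    for portion in control_portions[1:]:
        settings.update(portion)
    return settings

control = [
    {"type": "resnet50"},                  # coarse: default type only
    {"activation": "gelu", "layers": 34},  # finer: explicit settings
]
settings = parse_control_data(control)
assert settings["activation"] == "gelu"   # overridden by second portion
assert settings["input_size"] == 224      # default retained
```

A decoder needing only coarse information can stop after the first portion; one needing exact settings reads the full sequence.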
17. Apparatus for performing an inference using a neural network, comprising
an apparatus for decoding a data stream (45) according to any of claims 3 to 16, so as to derive from the data stream (45) the neural network, and
a processor configured to perform the inference based on the neural network.
18. Method for encoding a representation of a neural network into a data stream, so that the data stream is structured into individually accessible portions, each portion representing a corresponding neural network portion of the neural network, wherein the method comprises providing the data stream with, for each of one or more predetermined individually accessible portions, an identification parameter for identifying the respective predetermined individually accessible portion.
19. Method for decoding a representation of a neural network from a data stream, wherein the data stream is structured into individually accessible portions, each portion representing a corresponding neural network portion of the neural network, wherein the method comprises decoding from the data stream, for each of one or more predetermined individually accessible portions, an identification parameter for identifying the respective predetermined individually accessible portion.
20. Computer program for, when executed by a computer, causing the computer to perform the method of claim 18 or claim 19.
| # | Name | Date |
|---|---|---|
| 1 | 202518071109-STATEMENT OF UNDERTAKING (FORM 3) [25-07-2025(online)].pdf | 2025-07-25 |
| 2 | 202518071109-REQUEST FOR EXAMINATION (FORM-18) [25-07-2025(online)].pdf | 2025-07-25 |
| 3 | 202518071109-POWER OF AUTHORITY [25-07-2025(online)].pdf | 2025-07-25 |
| 4 | 202518071109-FORM 18 [25-07-2025(online)].pdf | 2025-07-25 |
| 5 | 202518071109-FORM 1 [25-07-2025(online)].pdf | 2025-07-25 |
| 6 | 202518071109-DRAWINGS [25-07-2025(online)].pdf | 2025-07-25 |
| 7 | 202518071109-DECLARATION OF INVENTORSHIP (FORM 5) [25-07-2025(online)].pdf | 2025-07-25 |
| 8 | 202518071109-COMPLETE SPECIFICATION [25-07-2025(online)].pdf | 2025-07-25 |
| 9 | 202518071109-Proof of Right [13-08-2025(online)].pdf | 2025-08-13 |