Abstract: [1] In recent decades, quantum-computing-based cryptography has emerged as one of the most secure approaches to data transmission over wired and wireless communication protocols. Machine Learning (ML) and quantum computing play a major role in designing and implementing cryptographic systems while optimizing both dynamic and static power utilization in digital circuits. ML is widely used in high-level synthesis (HLS) to achieve better and faster performance and to estimate power and hardware resource utilization before a downstream application is implemented on an FPGA. High-quality, large-volume datasets are necessary for training ML models to make accurate predictions; because the existing datasets used in this field are either private or restricted in use, practitioners must create their own datasets to train ML models connected to HLS. Combining these two techniques increases the security level and supports key authentication and integrity. Quantum Key Distribution (QKD) together with ML enables the communicating parties to detect side-channel effects and to protect keys against noisy channels. The QKD generates random keys that can be used as private and public keys for data exchange between two parties, ensuring that both parties have proper access to the meaningful parts of the key. SHA-256 generates 256-bit hash values that authenticate signatures and data on the fly, so encryption and decryption can proceed without waiting for the hash values used as private and public keys. The proposed design has been validated by benchmarking the overhead and measuring the performance degradation, showing its suitability for SoC and FPGA systems. The complete design is synthesized using the Vivado Design Suite 2018.1 and interfaced with the Software Development Kit (SDK) to validate data transfer between the user application and the FPGA.
Description: DESCRIPTION OF THE INVENTION
[11] When the master and slave want to share important information beyond random numbers, such as secret photos, important passwords, or any other useful data, protecting that information requires considerable memory and area and consumes more power; to optimize these costs, hybrid techniques combining ML and QKD are incorporated in the design. QKD-based encryption uses modulo-2 addition between the information and the key, where the key is generated by quantum computing, and the encryption expression is given by
Q_e = M ⊕_2 Q_k     (1)
where Q_e is the quantum-encrypted ciphertext, M is the secret information to be transmitted, and Q_k is the key generated by the quantum process. M is recovered by
M = Q_e ⊕_2 Q_k     (2)
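The following minimal C++ sketch illustrates equations (1) and (2): the ciphertext is the bitwise modulo-2 (XOR) sum of the message and a QKD-derived key, and applying the same key again recovers the message. The function name, byte values, and key contents are illustrative placeholders; in the proposed system the key would come from the QKD generator.

```cpp
#include <cstdint>
#include <vector>
#include <cassert>

// Modulo-2 (XOR) combination of message bytes with a QKD-derived key,
// implementing Q_e = M (+)_2 Q_k; applying the same key again yields
// M = Q_e (+)_2 Q_k. The key length must match the message length
// (one-time-pad style use of the sifted QKD key).
std::vector<uint8_t> xorWithKey(const std::vector<uint8_t>& data,
                                const std::vector<uint8_t>& key) {
    assert(data.size() == key.size());
    std::vector<uint8_t> out(data.size());
    for (size_t i = 0; i < data.size(); ++i)
        out[i] = data[i] ^ key[i];   // bitwise modulo-2 addition
    return out;
}

int main() {
    std::vector<uint8_t> message = {'s', 'e', 'c', 'r', 'e', 't'};
    // Placeholder key; in the proposed design this comes from the QKD generator.
    std::vector<uint8_t> qkdKey  = {0x3A, 0x91, 0x7C, 0x05, 0xE2, 0x4B};

    auto cipher    = xorWithKey(message, qkdKey);  // equation (1)
    auto recovered = xorWithKey(cipher,  qkdKey);  // equation (2)
    assert(recovered == message);
    return 0;
}
```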
[12] The BB84 protocol is a basic quantum cryptography protocol used to establish secret keys without pre-sharing a secret key. The information exchange procedure between master and slave is as follows.
The master selects a random bit generated by the QKD generator.
The master selects a random basis, denoted B_z or B_x, using a random bit generator, encodes the bit in that basis, and transmits the resulting qubit to the slave through the quantum channel as shown in Fig. 1. The slave selects a random basis (B_z or B_x) using a random generator, measures the received qubit, and decodes the original bit. If both parties used the same basis, they hold the same secret bit; if the bases differ, the slave obtains the same bit with probability ½ and a different bit with probability ½.
Both parties repeat the above steps until a reasonable number of bits have been exchanged. They then share the bases used for encoding and decoding over the classical channel, as shown in Fig. 1, discard every bit for which the bases do not match, and keep the remaining bits without revealing their values. At the end of this stage, both parties hold an identical private key.
Based on Fig. 1, for the positions where both bases agree (positions 4, 5, 6, and 8), the master and slave share the same bit; when the bases differ, the slave's result is |0> or |1> with probability ½ each, and neither party learns that key bit, as shown in Table 1.
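The self-contained C++ sketch below simulates the sifting logic just described: master and slave choose random bits and bases; when the bases agree the slave's measurement reproduces the master's bit, and when they disagree the outcome is 0 or 1 with probability ½; only positions with matching bases are kept for the shared key. The bit count and seed are illustrative values, not taken from the design.

```cpp
#include <cstddef>
#include <random>
#include <vector>
#include <iostream>

// Toy BB84 sifting model: basis 0 = B_z, basis 1 = B_x.
// When bases match, the measured bit equals the sent bit; when they
// differ, the measurement is 0 or 1 with probability 1/2 each.
int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);
    const std::size_t n = 16;

    std::vector<int> masterBits(n), masterBasis(n), slaveBasis(n), slaveBits(n);
    for (std::size_t i = 0; i < n; ++i) {
        masterBits[i]  = coin(rng);
        masterBasis[i] = coin(rng);
        slaveBasis[i]  = coin(rng);
        slaveBits[i]   = (masterBasis[i] == slaveBasis[i]) ? masterBits[i]
                                                           : coin(rng);
    }

    // Sifting over the classical channel: keep only matching-basis positions.
    std::vector<int> sharedKey;
    for (std::size_t i = 0; i < n; ++i)
        if (masterBasis[i] == slaveBasis[i])
            sharedKey.push_back(slaveBits[i]);

    std::cout << "Sifted key length: " << sharedKey.size() << '\n';
    return 0;
}
```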
[13] The hybrid techniques combining ML and quantum methods guarantee accurate metrics in terms of power optimization, speed improvement, and minimal latency. The proposed system requires a large dataset to produce accurate results, and this drawback is acceptable because HLS is a high-speed processing flow with the following requirements.
Step 1. During testing mode in the SDK, the design code should cover all possible test cases/vectors to obtain acceptable results.
Step 2. The design code should accommodate changes during compilation using HLS directives to achieve optimal results and allow any optimization changes to be applied, as illustrated in the sketch after Step 4.
Step 3. Advanced optimization strategies can be applied to the design code so that a wide range of hardware implementations can be generated.
Step 4. The design must be evaluated against both pre-implementation and post-implementation metrics to achieve the expected results.
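As an illustration of Step 2 and Step 3, the following Vivado HLS style C++ fragment shows how optimization directives can be attached to the design code so that different strategies are applied at synthesis time without rewriting the algorithm. The kernel name, array size, and chosen pragmas are hypothetical examples, not taken from the filed design.

```cpp
// Hypothetical HLS kernel illustrating directive-driven optimization:
// the same C++ source can be re-synthesized with different pragmas
// (pipelining, array partitioning, unrolling) to trade area for latency.
#define N 64

void xor_stream(const unsigned char in[N],
                const unsigned char key[N],
                unsigned char out[N]) {
// Expose the key bytes in parallel so the pipelined loop is not
// limited by a single memory port.
#pragma HLS ARRAY_PARTITION variable=key complete dim=1
    for (int i = 0; i < N; ++i) {
// Start a new iteration every clock cycle.
#pragma HLS PIPELINE II=1
        out[i] = in[i] ^ key[i];
    }
}
```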
[14] A boxplot, which displays the range spanning the majority of the distribution as well as the extremes, is used to depict the distribution of the weight values, and the grey boxes display the user-provided precision configuration. In general, the chosen type must be able to represent the largest absolute-valued weights (the boxes overlapping to the right of the plot). Small-valued weights can be truncated to lower the precision with little effect on accuracy, and the settings may be readily adjusted using this supplementary visualization tool for more efficient inference. Using fewer bits to represent each weight has been examined as a technique to compress NNs. FPGAs offer a great deal of flexibility in selecting the data type and precision, and both choices should be studied carefully to avoid wasting FPGA resources and adding extra latency. We utilize fixed-point arithmetic in hls4ml because it uses fewer resources and responds more quickly than floating-point arithmetic. Each layer's inputs, weights, biases, sums, and outputs are all fixed-point numbers, and for each of these the numbers of bits used to represent the integer and fractional parts can be specified independently for the use case. The precision can be greatly reduced without suffering a loss of accuracy. In the first implementation, the number of multipliers made available to the kernel is constrained to the number of nonzero weights using HLS pre-processor directives, and HLS is left to carry out the optimization; only lower network levels can support this. In the second implementation, a coordinate-list (COO) format compresses the nonzero weights by packing the indices into the weights themselves. The hls4ml user may set a Boolean compression parameter per layer to enable this kernel.
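As a concrete illustration of the fixed-point choice described above, the C++ sketch below uses the Xilinx ap_fixed type (assuming the Vivado HLS ap_fixed.h header is available) to accumulate a small fully connected layer and skips zero-valued weights in the spirit of the compressed kernels discussed. The layer sizes and the 16-bit/6-integer-bit precision are example values, not taken from the specification.

```cpp
#include <ap_fixed.h>   // Xilinx Vivado HLS fixed-point type

// Example precisions (illustrative): 16 total bits, 6 integer bits.
typedef ap_fixed<16, 6> data_t;
typedef ap_fixed<16, 6> weight_t;
typedef ap_fixed<24, 8> acc_t;    // wider accumulator for the running sum

#define N_IN  8
#define N_OUT 4

// Fully connected layer with fixed-point inputs, weights, biases and
// outputs. Zero-valued (pruned) weights contribute nothing and are
// skipped, mirroring the compressed-kernel idea described in the text.
void dense_fixed(const data_t in[N_IN],
                 const weight_t w[N_OUT][N_IN],
                 const weight_t bias[N_OUT],
                 data_t out[N_OUT]) {
    for (int o = 0; o < N_OUT; ++o) {
#pragma HLS PIPELINE II=1
        acc_t acc = bias[o];
        for (int i = 0; i < N_IN; ++i) {
            if (w[o][i] != 0)                      // skip pruned weights
                acc += (acc_t)(in[i] * w[o][i]);
        }
        out[o] = (data_t)acc;   // truncate/round back to layer precision
    }
}
```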
Claims: 1. Our invention, "INNOVATION OF ML AND QUANTUM CRYPTOGRAPHY FOR SOC APPLICATIONS", uses an ML and quantum-computing-based processor design flow for high throughput and minimal latency; additionally, a comparative analysis against related inventions has been carried out.
2. According to claim 1, the invention examines the analysis and estimation of Quality of Results in HLS with ML and quantum qubits. The efficiency of the CNN implementation can be significantly increased with little to no performance loss by reducing the precision of the NN computations and eliminating unnecessary calculations.
3. According to claims 1 and 2, the invention provides a future perspective focused on improved effectiveness: an ASIC implementation could be chosen rather than an FPGA implementation, but compared with developing for FPGAs, ASIC design is far more difficult and time-consuming, and verification and power analysis at different degrees of abstraction are more important in the ASIC design cycle. The ASIC process is combined with HLS for ML.
4. According to claims 1, 2, and 3, the invention identifies that the primary benefit over comparable methods is the speed of the code design process, utilizing a turnkey, all-in-one workflow for several artificial intelligence models and devices.
5. According to claims 1, 2, 3, and 4, the invention introduces new strategies for the key components of the HLS for ML and Quantum workflow, including network optimization methods that can be seamlessly integrated into device executions, such as pruning and quantization-aware training. By adding new Python APIs, quantization-aware pruning, end-to-end FPGA workflows, long pipeline kernels for low power, new device back-ends with an ASIC workflow, and other capabilities and techniques, we build on previous HLS for ML and Quantum work.
| # | Name | Date |
|---|---|---|
| 1 | 202341047886-REQUEST FOR EARLY PUBLICATION(FORM-9) [16-07-2023(online)].pdf | 2023-07-16 |
| 2 | 202341047886-FORM-9 [16-07-2023(online)].pdf | 2023-07-16 |
| 3 | 202341047886-FORM 1 [16-07-2023(online)].pdf | 2023-07-16 |
| 4 | 202341047886-FIGURE OF ABSTRACT [16-07-2023(online)].pdf | 2023-07-16 |
| 5 | 202341047886-DRAWINGS [16-07-2023(online)].pdf | 2023-07-16 |
| 6 | 202341047886-COMPLETE SPECIFICATION [16-07-2023(online)].pdf | 2023-07-16 |