
A System And Method Enabling Multi-Core Communication Using Field Programmable Gate Array Queues And Messages

Abstract: A system and method for multi-core communication using field programmable gate array queues and messages is described. The system in one embodiment provides a dedicated bus for connection of various computing nodes to a system management unit. Further, dedicated inward queues and outward queues working under instruction from the queue manager module are also present in an embodiment. Further, an owner computing node is configured to post the available resources, and computing nodes other than the owner computing node are configured to read the queue on an arbitrated basis. Refer Fig. 2.


Patent Information

Application #
201841036538
Filing Date
27 September 2018
Publication Number
14/2020
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
info@krishnaandsaurastri.com
Parent Application

Applicants

Bharat Electronics Limited
Corporate Office, Outer Ring Road, Nagavara, Bangalore, Karnataka, India, Pin Code–560 045.

Inventors

1. Suja Susan George
CPG/Central D&E Bharat Electronics Limited, Jalahalli PO, Bangalore, Karnataka, India, Pin Code-560 013.
2. Sivanantham S
CPG/Central D&E Bharat Electronics Limited, Jalahalli PO, Bangalore, Karnataka, India, Pin Code-560 013.

Specification

Claims:
We Claim:
1. A system for multi-core communication using field programmable gate array queues and messages, comprising:
a queue manager module configured to manage various queues of the multiple cores present;
a free buffer queue manager module configured to provide a location in storage to place data requiring processing by the system;
a quick message manager module configured to allow communication between cores with minimal latency; and
an interrupt manager module configured to generate an interrupt based on a status of the queue manager module, the free buffer queue manager module, and the quick message manager module.

2. The system as in claim 1, wherein the queue manager module comprises:
a queues sub-module that handles the queue operations; and
a queues status sub-module that maintains the status of the various queues.

3. The system as in claim 1, wherein the free buffer queue manager module further comprises:
a free buffer queues sub-module dealing with the free buffer queue operations; and
a free buffer queues status sub-module maintaining the status of the various free buffer queues.

4. The system as in claim 1, wherein the quick message manager module further comprises:
a quick message sub-module dealing with the quick message operations; and
a quick message status sub-module maintaining the status of the various quick messages.

5. The system as in claim 1, further comprising:
a dedicated bus for connection of various computing nodes to a system management unit.

6. The system as in claim 1, further comprising:
dedicated inward queues and outward queues working under instruction from the queue manager module.

7. The system as in claim 3, wherein an owner computing node is configured to post the available resources.

8. The system as in claim 7, wherein computing nodes other than the owner computing node are configured to read the queue on an arbitrated basis.

9. The system as in claim 5, wherein the system management unit is further configured to maintain dedicated quick messages per core for transport of high-priority messages from one core to another core.

10. The system as in claim 1, wherein for a write mode a quick message connection module is configured to execute the steps of:
checking a quick message status sub-module;
opening of a connection in write mode in one core if the quick message status is not full;
updating data in a quick message sub-module; and
generating an interrupt through an interrupt manager module.

11. The system as in claim 1, wherein for a read mode a quick message connection module is configured to execute the steps of:
waiting for the data by a consumer node;
opening the quick message sub-module in read mode if a data interrupt occurs; and
reading of data from the quick message sub-module by the consumer node raising a read request.
Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(SEE SECTION 10, RULE 13)

A SYSTEM AND METHOD ENABLING MULTI-CORE COMMUNICATION USING FIELD PROGRAMMABLE GATE ARRAY QUEUES AND MESSAGES

Bharat Electronics Limited,
Corporate Office, Outer Ring Road, Nagavara, Bangalore – 560045, Karnataka, India

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
TECHNICAL FIELD
[001] The invention herein relates generally to computing systems, and more particularly to multi-core processors in multi-processor systems.
BACKGROUND
[002] Computing systems have continued to develop since their inception to keep up with increasing computing requirements. Multi-core processors offer a solution by processing parts of computing problems in parallel. With an increase in the number of cores, however, new problems arise, such as the need for efficient synchronous communication between the various cores.
[003] One solution is proposed in WO 2013052695, which describes an inter-processor communication (IPC) apparatus including an arbitrated bus coupling the processors to one another and to a shared memory, a plurality of buffers in the shared memory, each buffer associated with one of the processors, and at least one pair of hardware queues coupled to each processor, the pair of hardware queues holding pointers to the buffers associated with that processor, wherein a first queue of the pair is associated with empty buffers of that processor while a second queue of the pair is associated with buffers containing messages for that processor.
[004] Another solution, proposed in Indian Patent Application No. IN/PCT/2007/09595/DEL, describes a method, and corresponding system and software, for writing data to a plurality of queues, each portion of the data being written to a corresponding one of the queues. The method includes, without requiring concurrent locking of more than one queue, determining whether space is available in each queue for writing the corresponding portion of the data, reserving the spaces in the queues if available, and then writing each portion of the data to its corresponding queue.
[005] Another solution, proposed in Indian Patent Application No. 405/MUMNP/2013, describes communication techniques that may be used within a multiple-processor computing platform. The techniques may, in some examples, provide software interfaces that may be used to support message passing within a multiple-processor computing platform that initiates tasks using command queues. The techniques may also provide software interfaces that may be used for shared-memory inter-processor communication within a multiple-processor computing platform. In further examples, the techniques may provide a graphics processing unit (GPU) that includes hardware for supporting message passing and/or shared-memory communication between the GPU and a host CPU.
[006] Another solution, proposed in Indian Patent Application No. 201741047441 filed by BEL, describes various methods for processing a digital signal using one or more multi-core processors. An aspect of that disclosure pertains to a system including a digital signal receive module to receive the digital signal pertaining to a process; a data packet writing module to write a plurality of data packets of the digital signal in a volatile memory; a digital signal analysis module to obtain one or more signal parameters by analysing the digital signal to determine the signal type; a data packet reading module to read the plurality of data packets from the volatile memory; and a digital signal processing module to process the digital signal based on data parallelism techniques. Embodiments of that disclosure aid in providing better inter-process communication, thereby resolving synchronization issues between the processor and the memory and speeding up memory read/write operations.
[007] There is thus still a need for an efficient system to communicate between the multiple cores of processors and to transfer data in a synchronous manner between the various cores.
SUMMARY
[008] A system for multi-core communication using field programmable gate array queues and messages is provided. The system in one embodiment comprises a queue manager module configured to manage the various queues of the multiple cores and computing nodes present; a free buffer queue manager module configured to provide a location in storage for placing data that requires processing by the system; a quick message manager module configured to allow communication between cores with minimal latency; and an interrupt manager module configured to generate interrupts based on the status of the queue manager module, the free buffer queue manager module, and the quick message manager module.
[009] The system in one embodiment also provides a dedicated bus for connection of the various computing nodes to a system management unit. Further, dedicated inward queues and outward queues working under instruction from the queue manager module are also present in an embodiment. Further, an owner computing node is configured to post the available resources, and computing nodes other than the owner computing node are configured to read the queue on an arbitrated basis.
[0010] Further, in an embodiment, the system management unit is further configured to maintain dedicated quick messages per core for transport of high-priority messages from one core to another core.
[0011] Furthermore, in an embodiment, opening a quick message connection in write mode comprises the steps of checking a quick message status sub-module, opening a connection in write mode in one core if the quick message status is not full, updating data in the quick message sub-module, and generating an interrupt through an interrupt manager module.
[0012] Furthermore, in an embodiment, opening a quick message connection in read mode is also provided. The method has the steps of waiting for the data by a consumer node, opening the quick message sub-module in read mode if a data interrupt occurs, and reading data from the quick message sub-module by the consumer node raising a read request.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0013] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and modules.
[0014] Fig. 1 illustrates a schematic of an ecosystem for multi-core communications as per an embodiment herein.
[0015] Fig. 2 illustrates a system for multi-core communication using field programmable gate array queues and messages as per an embodiment herein.
[0016] Fig. 3 illustrates a schematic showing the queue implementation in the system for multi-core communication using field programmable gate array queues and messages as per an embodiment herein.
[0017] Fig. 4 illustrates a schematic of the free buffer queue implementation as per an embodiment herein.
[0018] Fig. 5 illustrates a flow chart showing the steps involved in opening a quick message connection in write mode as per an embodiment herein.
[0019] Fig. 6 illustrates a flow chart showing the steps involved in opening a quick message connection in read mode as per an embodiment herein.
[0020] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0021] A system and method for multi-core communication using field programmable gate array queues and messages is described.
[001] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present claimed subject matter. It will be apparent, however, to one skilled in the art that the present claimed subject matter may be practiced without these details. One skilled in the art will recognize that embodiments of the present claimed subject matter, some of which are described below, may be incorporated into a number of systems. Multiple cores referred to at various places may refer to those belonging to multiple computing nodes/processing units (CPUs).
[002] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the presently claimed subject matter and are meant to avoid obscuring of the presently claimed subject matter.
[003] Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
A. Overview
[004] Fig. 1 illustrates a schematic representing an exemplary ecosystem under which a system for enabling multi-core communication using field programmable gate array (also referred to as FPGA) queues and messages is implemented as per an embodiment herein.
[005] In one embodiment the system may be in communication with various computing nodes. For example, in the exemplary embodiment shown in Figure 1, four computing nodes 102a, 102b, 102c, and 102d are in communication with the system management unit 101. This may be done through communication links 103a, 103b, 103c, and 103d. These links may be bi-directional and may include buses as well as management-related communication channels. Each computing node may be identified by the system management unit 101. This may be done during the start-up of the system.
[006] Figure 2 shows the system management unit 101 as per an embodiment herein. The system management unit may comprise various modules. A queue manager module 201 may be configured to manage the various queues of the multiple cores present in the system. It may comprise a queues sub-module 201a that handles the queue operations and a queues status sub-module 201b that maintains the status of the various queues. A dedicated connection may be present from the various computing nodes to the queue manager module.
[007] A free buffer queue manager module 202 may be configured to provide smooth placement of data in storage for the system requiring processing thereof. The free buffer queue manager may be used by the other modules to perform various operations, such as reading, writing, allocating, and de-allocating data in memory spaces. It may be further divided into a free buffer queues sub-module dealing with the operations of the free buffer queues and a free buffer queues status sub-module maintaining the status of the various free buffer queues.
[008] Further, a quick message manager module 203 may be present to allow communication between cores with minimal latency. This may be divided into a quick message sub-module 203a that handles the operations of the quick messages and a quick message status sub-module 203b that maintains the status of the various quick messages. Further, an interrupt manager module 204 may be present, configured to interrupt an operation. It may therefore provide a mechanism for quick response to requests so as to satisfy the critical time constraints of a system operation. The interrupt manager module may be informed based on the requirements of the various modules, namely the queue manager, the free buffer queue manager, and the quick message manager. This may be effected based on the results of the status sub-modules of the various modules.
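By way of illustration only, the following C sketch shows one possible way a computing node could view the system management unit 101 and its sub-modules as a memory-mapped peripheral. The base address, the register ordering, and the 32-bit register widths are assumptions introduced for the example; the specification itself only names the modules and their status sub-modules.

/* Minimal sketch of a memory-mapped view of the system management unit (101).
 * SMU_BASE, the register ordering, and the 32-bit widths are illustrative
 * assumptions only. */
#include <stdint.h>

#define SMU_BASE 0x43C00000UL          /* assumed FPGA base address of the SMU */

typedef struct {
    volatile uint32_t queue_data;      /* queues sub-module (201a)                */
    volatile uint32_t queue_status;    /* queues status sub-module (201b)         */
    volatile uint32_t free_buf_data;   /* free buffer queues sub-module           */
    volatile uint32_t free_buf_status; /* free buffer queues status sub-module    */
    volatile uint32_t quick_msg_data;  /* quick message sub-module (203a)         */
    volatile uint32_t quick_msg_status;/* quick message status sub-module (203b)  */
    volatile uint32_t irq_status;      /* interrupt manager module (204)          */
} smu_regs_t;

#define SMU ((smu_regs_t *)SMU_BASE)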
[009] Fig. 3 illustrates a schematic showing the queue implementation in the system for multi-core communication using field programmable gate array queues and messages as per an embodiment herein.
[0010] Various message queues relating to the multiple computing nodes or central processing units (CPUs) may be present. The queue manager may maintain one or more inward and outward registers whose contents may be copied to or from the different queues belonging to specific core buses, namely 302a, 302b, 302c, and 302d, corresponding to the four cores in the exemplary embodiment shown in Figure 3, i.e. CPU1 core 0/1, CPU2 core 0/1, CPU3 core 0/1, and CPU4 core 0/1. In Figure 3, an exemplary embodiment shows four CPU core queues, namely the CPU1 Core0/1 message queue 301, the CPU2 Core0/1 message queue 312, the CPU3 Core0/1 message queue 313, and the CPU4 Core0/1 message queue 314.
[0011] These queues may be present at various addresses in this exemplary embodiment, namely Address 0x50/51 305, Address 0x52/53 306, Address 0x54/55 307, and Address 0x56/57 308. Thus, every processor core gets one outward queue and one inward queue with respect to the other processor cores. The system management unit allows a core to update its own free buffers available in separate queues; other cores can only read this queue. The system management unit is also configured to provide an option for transfer of high-priority messages from one computing node to another computing node.
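The following C sketch, again for illustration only, shows how a core might post one word to its outward message queue. The per-core word addresses (0x50/51 through 0x56/57) are those of Figure 3; the base address, the word-addressing scheme, and the "full" status bit are assumptions made for the example.

/* Sketch of a core posting one word to its outward message queue.  The queue
 * word addresses (0x50/51 ... 0x56/57) come from Fig. 3; SMU_BASE, the word
 * addressing scheme, and Q_STATUS_FULL are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define SMU_BASE      0x43C00000UL     /* assumed base address             */
#define Q_STATUS_FULL (1u << 0)        /* assumed "queue full" status flag */

static inline volatile uint32_t *smu_word(uint32_t word_addr)
{
    return (volatile uint32_t *)(SMU_BASE + 4UL * word_addr);
}

/* e.g. queue_post(0x50, 0x51, msg): data word at 0x50, status word at 0x51 */
static bool queue_post(uint32_t data_addr, uint32_t status_addr, uint32_t msg)
{
    if (*smu_word(status_addr) & Q_STATUS_FULL)
        return false;                  /* outward queue full: caller backs off        */
    *smu_word(data_addr) = msg;        /* queue manager copies the word into the queue */
    return true;
}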
[0012] Fig. 4 illustrates a schematic of the free buffer queue implementation as per an embodiment herein. For example, the message queues relating to the cores of the computing nodes, namely CPU1 Core0/1 402a, CPU2 Core0/1 402b, CPU3 Core0/1 402c, and CPU4 Core0/1 402d, may be in communication with the CPU1 Core0/1 free queue at address 0x60/61 401, the CPU2 Core0/1 free queue at address 0x62/63 403, the CPU3 Core0/1 free queue at address 0x64/65 404, and the CPU4 Core0/1 free queue at address 0x66/67 405. The owner computing node may write the available resources in an inward register, which in turn may be copied into a queue based on a write request signal. Access to the resources by the computing nodes is based on arbitration. To this effect, the free buffer queue manager may be configured to receive such a request from a computing node.
[0013] The free buffer queue manager module may be configured to provide access to the resource on a first-come-first-served basis. The requesting computing node may be configured to issue a read request before it can access the resource. The consumer node may be configured/instructed by the system management unit to release the request after the resource access. The system management unit provides a feature wherein, if this step is not performed, the access request expires after a timeout.
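A minimal consumer-side sketch of this sequence in C is given below. Only the behaviour (raising a read request, accessing on a first-come-first-served grant, releasing afterwards, and expiry on timeout) follows from the description above; the register addresses beyond those of Fig. 4, the bit names, and the poll limit are assumptions for illustration.

/* Consumer-side sketch of acquiring a free buffer posted by the owner node.
 * The control-word address, GRANT/REQUEST bits, and poll limit are assumed. */
#include <stdbool.h>
#include <stdint.h>

#define SMU_BASE       0x43C00000UL
#define FBQ_DATA       ((volatile uint32_t *)(SMU_BASE + 4UL * 0x60)) /* Fig. 4: free queue 0x60 */
#define FBQ_CTRL       ((volatile uint32_t *)(SMU_BASE + 4UL * 0x61)) /* assumed control/status  */
#define FBQ_REQ_READ   (1u << 0)       /* assumed "read request" bit           */
#define FBQ_GRANTED    (1u << 1)       /* assumed "access granted" bit         */
#define FBQ_POLL_LIMIT 100000u         /* assumed bound; HW also expires requests */

static bool free_buffer_get(uint32_t *buf_out)
{
    *FBQ_CTRL = FBQ_REQ_READ;                 /* raise a read request                    */
    for (uint32_t i = 0; i < FBQ_POLL_LIMIT; i++) {
        if (*FBQ_CTRL & FBQ_GRANTED) {        /* arbiter granted access (FCFS)           */
            *buf_out = *FBQ_DATA;             /* read the posted buffer pointer          */
            *FBQ_CTRL = 0;                    /* release the request after the access    */
            return true;
        }
    }
    return false;  /* not granted here; the SMU expires the request after its timeout */
}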
[0014] The working and configuration of the quick message manager module 203 may be explained with reference to Fig. 5, which illustrates a flow chart showing the steps involved in opening a quick message connection in write mode as per an embodiment herein.
[0015] The quick message manager module may be used to send a message from one core to another core with minimal latency. The quick message manager consists of a quick message sub-module and a status sub-module. The status sub-module may be present in the form of one or more registers. It also reports to the interrupt manager when a message is full or empty. The quick message manager module may be dedicated to each computing node. As per an embodiment herein, the method of opening a quick message connection in write mode may comprise the step of checking 501 the quick message status sub-module; in particular, a check of whether the status is full may take place. If the status is not full, opening 502 of a connection in write mode in one core takes place. This may be effected by an owner computing node raising a write request. Updating 503 of data in the quick message sub-module may then take place. Thereafter, the step of generating 504 an interrupt through the interrupt manager module takes place.
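For illustration, the write-mode steps 501 to 504 could be realised in software on the producer core roughly as in the following C sketch. The register layout and the "full" and doorbell bits are assumptions; only the ordering of the steps comes from the description.

/* Producer-side sketch of the write-mode sequence of Fig. 5 (steps 501-504).
 * The register layout, QM_STATUS_FULL, and QM_IRQ_RAISE are assumed. */
#include <stdbool.h>
#include <stdint.h>

#define QM_STATUS_FULL (1u << 0)   /* assumed "quick message full" flag        */
#define QM_IRQ_RAISE   (1u << 0)   /* assumed doorbell bit in the IRQ register */

typedef struct {
    volatile uint32_t data;        /* quick message sub-module (203a)          */
    volatile uint32_t status;      /* quick message status sub-module (203b)   */
    volatile uint32_t irq;         /* interrupt manager module (204)           */
} quick_msg_t;

static bool quick_msg_write(quick_msg_t *qm, uint32_t msg)
{
    if (qm->status & QM_STATUS_FULL)   /* 501: check the status sub-module          */
        return false;
    /* 502: connection is opened in write mode by the owner core's write request */
    qm->data = msg;                    /* 503: update data in the quick message sub-module */
    qm->irq  = QM_IRQ_RAISE;           /* 504: generate interrupt via the interrupt manager */
    return true;
}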
[0016] Fig. 6 illustrates a flow chart showing the steps involved in opening a quick message connection in read mode as per an embodiment herein. The method may comprise the step of waiting 601 for the data by a consumer node, then opening 602 the quick message sub-module in read mode if a data interrupt occurs. Reading 603 of data from the quick message sub-module then takes place. This is effected by the consumer node raising a read request.
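A corresponding consumer-side sketch of the read-mode steps 601 to 603 is given below. The interrupt-service hook and the flag it sets are assumptions made for the example; the wait-for-interrupt, open-in-read-mode, read-on-request ordering follows the description.

/* Consumer-side sketch of the read-mode sequence of Fig. 6 (steps 601-603).
 * The ISR hook and the volatile flag it sets are assumptions. */
#include <stdint.h>

typedef struct {
    volatile uint32_t data;        /* quick message sub-module (203a)        */
    volatile uint32_t status;      /* quick message status sub-module (203b) */
    volatile uint32_t irq;         /* interrupt manager module (204)         */
} quick_msg_t;

static volatile int qm_data_irq;                 /* set by the hypothetical data interrupt handler */

void quick_msg_isr(void) { qm_data_irq = 1; }    /* assumed to be registered with the IRQ controller */

static uint32_t quick_msg_read(quick_msg_t *qm)
{
    while (!qm_data_irq)                     /* 601: consumer waits for data from the producer */
        ;                                    /* a real port could sleep or use WFI here        */
    qm_data_irq = 0;
    /* 602: open the quick message sub-module in read mode (consumer raises a read request) */
    return qm->data;                         /* 603: read the message word                    */
}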
[0017] Thus, the described method of communicating between multiple processor cores and transferring data in a synchronous manner has many benefits. The system includes a dedicated bus which may be used to connect each processor to all other processors through the system management unit 101. This may be implemented in field programmable gate arrays (FPGAs). The system may also include dedicated inward and outward message queues, free buffer queues, high-priority quick messages, and an interrupt mechanism to alert the processor cores in case of message reception and queue empty/full status.
[0018] The system management unit 101 may assign a separate address for each of the message queues, free buffer queues, and high-priority quick messages. The blueprint of the address map is communicated to all the computing nodes of the processors. The system management unit may fix the queue size at the initial phase, and the same may be intimated to all the computing nodes. A dedicated bus is used to connect one processor to all other processors through the system management unit 101, which is implemented in FPGAs.
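As an illustration of such a blueprint, the C sketch below collects the per-core word addresses of Figures 3 and 4 into a table that could be shared with every computing node at start-up. The table layout and the fixed queue depth are assumptions; only the addresses themselves appear in the figures.

/* Sketch of the address-map "blueprint" shared with every computing node.
 * NUM_NODES and the addresses follow Figs. 3 and 4; QUEUE_DEPTH and the
 * struct layout are assumed for illustration. */
#include <stdint.h>

#define NUM_NODES   4
#define QUEUE_DEPTH 16              /* assumed queue size fixed by the SMU at the initial phase */

typedef struct {
    uint32_t msg_queue_addr;        /* inward/outward message queue word (Fig. 3) */
    uint32_t free_queue_addr;       /* free buffer queue word (Fig. 4)            */
} node_map_t;

static const node_map_t address_map[NUM_NODES] = {
    { 0x50, 0x60 },   /* CPU1 Core0/1: message queue 0x50/51, free queue 0x60/61 */
    { 0x52, 0x62 },   /* CPU2 Core0/1: message queue 0x52/53, free queue 0x62/63 */
    { 0x54, 0x64 },   /* CPU3 Core0/1: message queue 0x54/55, free queue 0x64/65 */
    { 0x56, 0x66 },   /* CPU4 Core0/1: message queue 0x56/57, free queue 0x66/67 */
};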
[0019] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0020] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the invention.

Documents

Application Documents

# Name Date
1 201841036538-STATEMENT OF UNDERTAKING (FORM 3) [27-09-2018(online)].pdf 2018-09-27
2 201841036538-FORM 1 [27-09-2018(online)].pdf 2018-09-27
3 201841036538-FIGURE OF ABSTRACT [27-09-2018(online)].pdf 2018-09-27
4 201841036538-DRAWINGS [27-09-2018(online)].pdf 2018-09-27
5 201841036538-DECLARATION OF INVENTORSHIP (FORM 5) [27-09-2018(online)].pdf 2018-09-27
6 201841036538-COMPLETE SPECIFICATION [27-09-2018(online)].pdf 2018-09-27
7 201841036538-Proof of Right (MANDATORY) [13-11-2018(online)].pdf 2018-11-13
8 Correspondence by Agent_Form1_26-11-2018.pdf 2018-11-26
9 201841036538-FORM-26 [27-12-2018(online)].pdf 2018-12-27
10 Correspondence by Agent_Power of Attorney_07-01-2019.pdf 2019-01-07
11 201841036538-FORM 18 [10-02-2021(online)].pdf 2021-02-10
12 201841036538-FER.pdf 2022-01-19
13 201841036538-OTHERS [18-07-2022(online)].pdf 2022-07-18
14 201841036538-FER_SER_REPLY [18-07-2022(online)].pdf 2022-07-18
15 201841036538-DRAWING [18-07-2022(online)].pdf 2022-07-18
16 201841036538-COMPLETE SPECIFICATION [18-07-2022(online)].pdf 2022-07-18
17 201841036538-CLAIMS [18-07-2022(online)].pdf 2022-07-18
18 201841036538-ABSTRACT [18-07-2022(online)].pdf 2022-07-18
19 201841036538-POA [09-10-2024(online)].pdf 2024-10-09
20 201841036538-FORM 13 [09-10-2024(online)].pdf 2024-10-09
21 201841036538-AMENDED DOCUMENTS [09-10-2024(online)].pdf 2024-10-09

Search Strategy

1 SearchHistory(13)E_11-01-2022.pdf